Jul 14 22:41:20.985608 kernel: Linux version 5.15.187-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 14 20:42:36 -00 2025
Jul 14 22:41:20.985639 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d9618a329f89744ce954b0fa1b02ce8164745af7389f9de9c3421ad2087e0dba
Jul 14 22:41:20.985651 kernel: BIOS-provided physical RAM map:
Jul 14 22:41:20.985659 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 14 22:41:20.985666 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 14 22:41:20.985673 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 14 22:41:20.985682 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 14 22:41:20.985690 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 14 22:41:20.985697 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 14 22:41:20.985706 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 14 22:41:20.985713 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jul 14 22:41:20.985721 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Jul 14 22:41:20.985728 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 14 22:41:20.985736 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 14 22:41:20.985745 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 14 22:41:20.985755 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 14 22:41:20.985763 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 14 22:41:20.985771 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 14 22:41:20.985779 kernel: NX (Execute Disable) protection: active
Jul 14 22:41:20.985786 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Jul 14 22:41:20.985795 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Jul 14 22:41:20.985802 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Jul 14 22:41:20.985810 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Jul 14 22:41:20.985818 kernel: extended physical RAM map:
Jul 14 22:41:20.985825 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 14 22:41:20.985835 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 14 22:41:20.985843 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 14 22:41:20.985851 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 14 22:41:20.985859 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 14 22:41:20.985866 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 14 22:41:20.985874 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 14 22:41:20.985882 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Jul 14 22:41:20.985890 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Jul 14 22:41:20.985898 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Jul 14 22:41:20.985905 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Jul 14 22:41:20.985913 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Jul 14 22:41:20.985923 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Jul 14 22:41:20.985931 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 14 22:41:20.985939 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 14 22:41:20.985947 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 14 22:41:20.985969 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 14 22:41:20.985979 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 14 22:41:20.985987 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 14 22:41:20.985997 kernel: efi: EFI v2.70 by EDK II
Jul 14 22:41:20.986005 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
Jul 14 22:41:20.986014 kernel: random: crng init done
Jul 14 22:41:20.986023 kernel: SMBIOS 2.8 present.
Jul 14 22:41:20.986031 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jul 14 22:41:20.986039 kernel: Hypervisor detected: KVM
Jul 14 22:41:20.986048 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 14 22:41:20.986056 kernel: kvm-clock: cpu 0, msr 6419b001, primary cpu clock
Jul 14 22:41:20.986064 kernel: kvm-clock: using sched offset of 5269527394 cycles
Jul 14 22:41:20.986075 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 14 22:41:20.986084 kernel: tsc: Detected 2794.750 MHz processor
Jul 14 22:41:20.986093 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 14 22:41:20.986102 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 14 22:41:20.986110 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jul 14 22:41:20.986119 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 14 22:41:20.986128 kernel: Using GB pages for direct mapping
Jul 14 22:41:20.986137 kernel: Secure boot disabled
Jul 14 22:41:20.986145 kernel: ACPI: Early table checksum verification disabled
Jul 14 22:41:20.986156 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 14 22:41:20.986164 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 22:41:20.986173 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:41:20.986182 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:41:20.986190 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 14 22:41:20.986199 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:41:20.986208 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:41:20.986216 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:41:20.986225 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:41:20.986236 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 14 22:41:20.986244 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 14 22:41:20.986253 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 14 22:41:20.986262 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 14 22:41:20.986270 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 14 22:41:20.986279 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 14 22:41:20.986288 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 14 22:41:20.986296 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 14 22:41:20.986305 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 14 22:41:20.986315 kernel: No NUMA configuration found
Jul 14 22:41:20.986324 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jul 14 22:41:20.986332 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jul 14 22:41:20.986341 kernel: Zone ranges:
Jul 14 22:41:20.986350 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 22:41:20.986359 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jul 14 22:41:20.986367 kernel: Normal empty
Jul 14 22:41:20.986376 kernel: Movable zone start for each node
Jul 14 22:41:20.986384 kernel: Early memory node ranges
Jul 14 22:41:20.986395 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 14 22:41:20.986403 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 14 22:41:20.986412 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 14 22:41:20.986420 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jul 14 22:41:20.986429 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jul 14 22:41:20.986438 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jul 14 22:41:20.986446 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jul 14 22:41:20.986455 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 22:41:20.986464 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 14 22:41:20.986472 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 14 22:41:20.986482 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 22:41:20.986491 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jul 14 22:41:20.986500 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 14 22:41:20.986509 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jul 14 22:41:20.986517 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 14 22:41:20.986526 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 14 22:41:20.986534 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 14 22:41:20.986543 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 14 22:41:20.986552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 14 22:41:20.986563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 14 22:41:20.986571 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 14 22:41:20.986580 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 14 22:41:20.986589 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 22:41:20.986597 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 14 22:41:20.986606 kernel: TSC deadline timer available
Jul 14 22:41:20.986622 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 14 22:41:20.986631 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 14 22:41:20.986640 kernel: kvm-guest: setup PV sched yield
Jul 14 22:41:20.986651 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jul 14 22:41:20.986660 kernel: Booting paravirtualized kernel on KVM
Jul 14 22:41:20.986674 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 22:41:20.986685 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Jul 14 22:41:20.986694 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Jul 14 22:41:20.986703 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Jul 14 22:41:20.986712 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 14 22:41:20.986721 kernel: kvm-guest: setup async PF for cpu 0
Jul 14 22:41:20.986730 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Jul 14 22:41:20.986739 kernel: kvm-guest: PV spinlocks enabled
Jul 14 22:41:20.986748 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 14 22:41:20.986757 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jul 14 22:41:20.986767 kernel: Policy zone: DMA32
Jul 14 22:41:20.986778 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d9618a329f89744ce954b0fa1b02ce8164745af7389f9de9c3421ad2087e0dba
Jul 14 22:41:20.986788 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 22:41:20.986797 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 22:41:20.986807 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 22:41:20.986817 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 22:41:20.986826 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 169308K reserved, 0K cma-reserved)
Jul 14 22:41:20.986836 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 22:41:20.986845 kernel: ftrace: allocating 34607 entries in 136 pages
Jul 14 22:41:20.986854 kernel: ftrace: allocated 136 pages with 2 groups
Jul 14 22:41:20.986863 kernel: rcu: Hierarchical RCU implementation.
Jul 14 22:41:20.986872 kernel: rcu: RCU event tracing is enabled.
Jul 14 22:41:20.986882 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 22:41:20.986893 kernel: Rude variant of Tasks RCU enabled.
Jul 14 22:41:20.986902 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 22:41:20.986911 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 22:41:20.986920 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 22:41:20.986930 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 14 22:41:20.986939 kernel: Console: colour dummy device 80x25
Jul 14 22:41:20.986948 kernel: printk: console [ttyS0] enabled
Jul 14 22:41:20.986957 kernel: ACPI: Core revision 20210730
Jul 14 22:41:20.986977 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 14 22:41:20.986988 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 22:41:20.986997 kernel: x2apic enabled
Jul 14 22:41:20.987006 kernel: Switched APIC routing to physical x2apic.
Jul 14 22:41:20.987015 kernel: kvm-guest: setup PV IPIs
Jul 14 22:41:20.987024 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 22:41:20.987033 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 14 22:41:20.987043 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 14 22:41:20.987052 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 14 22:41:20.987061 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 14 22:41:20.987072 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 14 22:41:20.987081 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 22:41:20.987090 kernel: Spectre V2 : Mitigation: Retpolines
Jul 14 22:41:20.987099 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 14 22:41:20.987108 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 14 22:41:20.987117 kernel: RETBleed: Mitigation: untrained return thunk
Jul 14 22:41:20.987126 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 22:41:20.987136 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 14 22:41:20.987145 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 22:41:20.987156 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 22:41:20.987165 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 22:41:20.987174 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 14 22:41:20.987183 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 14 22:41:20.987192 kernel: Freeing SMP alternatives memory: 32K
Jul 14 22:41:20.987201 kernel: pid_max: default: 32768 minimum: 301
Jul 14 22:41:20.987210 kernel: LSM: Security Framework initializing
Jul 14 22:41:20.987219 kernel: SELinux: Initializing.
Jul 14 22:41:20.987229 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:41:20.987240 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:41:20.987249 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 14 22:41:20.987258 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 14 22:41:20.987267 kernel: ... version: 0
Jul 14 22:41:20.987276 kernel: ... bit width: 48
Jul 14 22:41:20.987285 kernel: ... generic registers: 6
Jul 14 22:41:20.987294 kernel: ... value mask: 0000ffffffffffff
Jul 14 22:41:20.987303 kernel: ... max period: 00007fffffffffff
Jul 14 22:41:20.987312 kernel: ... fixed-purpose events: 0
Jul 14 22:41:20.987323 kernel: ... event mask: 000000000000003f
Jul 14 22:41:20.987332 kernel: signal: max sigframe size: 1776
Jul 14 22:41:20.987341 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 22:41:20.987350 kernel: smp: Bringing up secondary CPUs ...
Jul 14 22:41:20.987359 kernel: x86: Booting SMP configuration:
Jul 14 22:41:20.987368 kernel: .... node #0, CPUs: #1
Jul 14 22:41:20.987377 kernel: kvm-clock: cpu 1, msr 6419b041, secondary cpu clock
Jul 14 22:41:20.987386 kernel: kvm-guest: setup async PF for cpu 1
Jul 14 22:41:20.987395 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Jul 14 22:41:20.987406 kernel: #2
Jul 14 22:41:20.987415 kernel: kvm-clock: cpu 2, msr 6419b081, secondary cpu clock
Jul 14 22:41:20.987424 kernel: kvm-guest: setup async PF for cpu 2
Jul 14 22:41:20.987433 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Jul 14 22:41:20.987442 kernel: #3
Jul 14 22:41:20.987451 kernel: kvm-clock: cpu 3, msr 6419b0c1, secondary cpu clock
Jul 14 22:41:20.987460 kernel: kvm-guest: setup async PF for cpu 3
Jul 14 22:41:20.987469 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Jul 14 22:41:20.987478 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 22:41:20.987487 kernel: smpboot: Max logical packages: 1
Jul 14 22:41:20.987498 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 14 22:41:20.987507 kernel: devtmpfs: initialized
Jul 14 22:41:20.987516 kernel: x86/mm: Memory block size: 128MB
Jul 14 22:41:20.987525 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 14 22:41:20.987535 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 14 22:41:20.987544 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jul 14 22:41:20.987553 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 14 22:41:20.987562 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 14 22:41:20.987573 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 22:41:20.987582 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 22:41:20.987592 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 22:41:20.987601 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 22:41:20.987610 kernel: audit: initializing netlink subsys (disabled)
Jul 14 22:41:20.987628 kernel: audit: type=2000 audit(1752532879.800:1): state=initialized audit_enabled=0 res=1
Jul 14 22:41:20.987637 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 22:41:20.987646 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 14 22:41:20.987655 kernel: cpuidle: using governor menu
Jul 14 22:41:20.987666 kernel: ACPI: bus type PCI registered
Jul 14 22:41:20.987675 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 22:41:20.987684 kernel: dca service started, version 1.12.1
Jul 14 22:41:20.987694 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 14 22:41:20.987703 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Jul 14 22:41:20.987712 kernel: PCI: Using configuration type 1 for base access
Jul 14 22:41:20.987722 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 14 22:41:20.987731 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 22:41:20.987740 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 22:41:20.987750 kernel: ACPI: Added _OSI(Module Device)
Jul 14 22:41:20.987760 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 22:41:20.987769 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 22:41:20.987778 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 14 22:41:20.987787 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 14 22:41:20.987796 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 14 22:41:20.987805 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 22:41:20.987814 kernel: ACPI: Interpreter enabled
Jul 14 22:41:20.987823 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 14 22:41:20.987832 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 14 22:41:20.987843 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 14 22:41:20.987852 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 14 22:41:20.987861 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 22:41:20.988014 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 22:41:20.988163 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 14 22:41:20.988257 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 14 22:41:20.988271 kernel: PCI host bridge to bus 0000:00
Jul 14 22:41:20.988370 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 14 22:41:20.988455 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 14 22:41:20.988536 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 14 22:41:20.988629 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 14 22:41:20.988715 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 14 22:41:20.988798 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jul 14 22:41:20.989386 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 22:41:20.989505 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 14 22:41:20.989627 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 14 22:41:20.989726 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 14 22:41:20.989821 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jul 14 22:41:20.989917 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 14 22:41:20.990100 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jul 14 22:41:20.990201 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 14 22:41:20.990306 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 22:41:20.990406 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jul 14 22:41:20.990501 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jul 14 22:41:20.990602 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jul 14 22:41:20.990719 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 14 22:41:20.990816 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jul 14 22:41:20.990923 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 14 22:41:20.991036 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jul 14 22:41:20.991141 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 14 22:41:20.991237 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jul 14 22:41:20.991334 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 14 22:41:20.991429 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jul 14 22:41:20.991523 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 14 22:41:20.991637 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 14 22:41:20.991733 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 14 22:41:20.991835 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 14 22:41:20.991930 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jul 14 22:41:20.992037 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jul 14 22:41:20.992140 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 14 22:41:20.992236 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jul 14 22:41:20.992250 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 14 22:41:20.992260 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 14 22:41:20.992269 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 14 22:41:20.992279 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 14 22:41:20.992288 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 14 22:41:20.992297 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 14 22:41:20.992306 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 14 22:41:20.992315 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 14 22:41:20.992327 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 14 22:41:20.992336 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 14 22:41:20.992345 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 14 22:41:20.992355 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 14 22:41:20.992364 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 14 22:41:20.992373 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 14 22:41:20.992382 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 14 22:41:20.992391 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 14 22:41:20.992400 kernel: iommu: Default domain type: Translated
Jul 14 22:41:20.992411 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 14 22:41:20.992505 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 14 22:41:20.992599 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 14 22:41:20.992707 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 14 22:41:20.992721 kernel: vgaarb: loaded
Jul 14 22:41:20.992730 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 14 22:41:20.992740 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 14 22:41:20.992749 kernel: PTP clock support registered
Jul 14 22:41:20.992761 kernel: Registered efivars operations
Jul 14 22:41:20.992770 kernel: PCI: Using ACPI for IRQ routing
Jul 14 22:41:20.992779 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 14 22:41:20.992788 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 14 22:41:20.992797 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jul 14 22:41:20.992806 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Jul 14 22:41:20.992815 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Jul 14 22:41:20.992825 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jul 14 22:41:20.992834 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jul 14 22:41:20.992845 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 14 22:41:20.992854 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 14 22:41:20.992864 kernel: clocksource: Switched to clocksource kvm-clock
Jul 14 22:41:20.992873 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 22:41:20.992882 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 22:41:20.992892 kernel: pnp: PnP ACPI init
Jul 14 22:41:20.993012 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 14 22:41:20.993027 kernel: pnp: PnP ACPI: found 6 devices
Jul 14 22:41:20.993039 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 14 22:41:20.993049 kernel: NET: Registered PF_INET protocol family
Jul 14 22:41:20.993058 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 22:41:20.993067 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 22:41:20.993077 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 22:41:20.993086 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 22:41:20.993095 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 14 22:41:20.993105 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 22:41:20.993114 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:41:20.993125 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:41:20.993135 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 22:41:20.993144 kernel: NET: Registered PF_XDP protocol family
Jul 14 22:41:20.993240 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 14 22:41:20.993390 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 14 22:41:20.993491 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 14 22:41:20.993570 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 14 22:41:20.993657 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 14 22:41:20.993740 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 14 22:41:20.993817 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 14 22:41:20.993893 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jul 14 22:41:20.993906 kernel: PCI: CLS 0 bytes, default 64
Jul 14 22:41:20.993916 kernel: Initialise system trusted keyrings
Jul 14 22:41:20.993926 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 22:41:20.993936 kernel: Key type asymmetric registered
Jul 14 22:41:20.993946 kernel: Asymmetric key parser 'x509' registered
Jul 14 22:41:20.993956 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 14 22:41:20.994001 kernel: io scheduler mq-deadline registered
Jul 14 22:41:20.994012 kernel: io scheduler kyber registered
Jul 14 22:41:20.994032 kernel: io scheduler bfq registered
Jul 14 22:41:20.994044 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 14 22:41:20.994055 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 14 22:41:20.994065 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 14 22:41:20.994076 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 14 22:41:20.994086 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 22:41:20.994096 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 14 22:41:20.994108 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 14 22:41:20.994119 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 14 22:41:20.994133 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 14 22:41:20.994252 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 14 22:41:20.994269 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 14 22:41:20.994346 kernel: rtc_cmos 00:04: registered as rtc0
Jul 14 22:41:20.994423 kernel: rtc_cmos 00:04: setting system clock to 2025-07-14T22:41:20 UTC (1752532880)
Jul 14 22:41:20.994507 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 14 22:41:20.994524 kernel: efifb: probing for efifb
Jul 14 22:41:20.994534 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 14 22:41:20.994544 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 14 22:41:20.994553 kernel: efifb: scrolling: redraw
Jul 14 22:41:20.994563 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 14 22:41:20.994573 kernel: Console: switching to colour frame buffer device 160x50
Jul 14 22:41:20.994582 kernel: fb0: EFI VGA frame buffer device
Jul 14 22:41:20.994592 kernel: pstore: Registered efi as persistent store backend
Jul 14 22:41:20.994601 kernel: NET: Registered PF_INET6 protocol family
Jul 14 22:41:20.994621 kernel: Segment Routing with IPv6
Jul 14 22:41:20.994631 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 22:41:20.994641 kernel: NET: Registered PF_PACKET protocol family
Jul 14 22:41:20.994652 kernel: Key type dns_resolver registered
Jul 14 22:41:20.994662 kernel: IPI shorthand broadcast: enabled
Jul 14 22:41:20.994672 kernel: sched_clock: Marking stable (476262755, 188508309)->(828489882, -163718818)
Jul 14 22:41:20.994683 kernel: registered taskstats version 1
Jul 14 22:41:20.994693 kernel: Loading compiled-in X.509 certificates
Jul 14 22:41:20.994703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.187-flatcar: 14a6940dcbc00bab0c83ae71c4abeb315720716d'
Jul 14 22:41:20.994712 kernel: Key type .fscrypt registered
Jul 14 22:41:20.994723 kernel: Key type fscrypt-provisioning registered
Jul 14 22:41:20.994733 kernel: pstore: Using crash dump compression: deflate
Jul 14 22:41:20.994743 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 22:41:20.994752 kernel: ima: Allocated hash algorithm: sha1 Jul 14 22:41:20.994763 kernel: ima: No architecture policies found Jul 14 22:41:20.994773 kernel: clk: Disabling unused clocks Jul 14 22:41:20.994783 kernel: Freeing unused kernel image (initmem) memory: 47476K Jul 14 22:41:20.994792 kernel: Write protecting the kernel read-only data: 28672k Jul 14 22:41:20.994802 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 14 22:41:20.994811 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Jul 14 22:41:20.994821 kernel: Run /init as init process Jul 14 22:41:20.994830 kernel: with arguments: Jul 14 22:41:20.994840 kernel: /init Jul 14 22:41:20.994851 kernel: with environment: Jul 14 22:41:20.994861 kernel: HOME=/ Jul 14 22:41:20.994870 kernel: TERM=linux Jul 14 22:41:20.994879 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 22:41:20.994891 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 22:41:20.994903 systemd[1]: Detected virtualization kvm. Jul 14 22:41:20.994914 systemd[1]: Detected architecture x86-64. Jul 14 22:41:20.994924 systemd[1]: Running in initrd. Jul 14 22:41:20.994936 systemd[1]: No hostname configured, using default hostname. Jul 14 22:41:20.994945 systemd[1]: Hostname set to . Jul 14 22:41:20.994956 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:41:20.994978 systemd[1]: Queued start job for default target initrd.target. Jul 14 22:41:20.994989 systemd[1]: Started systemd-ask-password-console.path. Jul 14 22:41:20.994999 systemd[1]: Reached target cryptsetup.target. Jul 14 22:41:20.995009 systemd[1]: Reached target paths.target. Jul 14 22:41:20.995019 systemd[1]: Reached target slices.target. 
Jul 14 22:41:20.995030 systemd[1]: Reached target swap.target. Jul 14 22:41:20.995041 systemd[1]: Reached target timers.target. Jul 14 22:41:20.995052 systemd[1]: Listening on iscsid.socket. Jul 14 22:41:20.995062 systemd[1]: Listening on iscsiuio.socket. Jul 14 22:41:20.995072 systemd[1]: Listening on systemd-journald-audit.socket. Jul 14 22:41:20.995082 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 14 22:41:20.995093 systemd[1]: Listening on systemd-journald.socket. Jul 14 22:41:20.995103 systemd[1]: Listening on systemd-networkd.socket. Jul 14 22:41:20.995115 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 22:41:20.995125 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 22:41:20.995136 systemd[1]: Reached target sockets.target. Jul 14 22:41:20.995146 systemd[1]: Starting kmod-static-nodes.service... Jul 14 22:41:20.995156 systemd[1]: Finished network-cleanup.service. Jul 14 22:41:20.995166 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 22:41:20.995176 systemd[1]: Starting systemd-journald.service... Jul 14 22:41:20.995186 systemd[1]: Starting systemd-modules-load.service... Jul 14 22:41:20.995200 systemd[1]: Starting systemd-resolved.service... Jul 14 22:41:20.995216 systemd[1]: Starting systemd-vconsole-setup.service... Jul 14 22:41:20.995227 systemd[1]: Finished kmod-static-nodes.service. Jul 14 22:41:20.995239 kernel: audit: type=1130 audit(1752532880.984:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:20.995250 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 22:41:20.995261 kernel: audit: type=1130 audit(1752532880.988:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:20.995284 systemd[1]: Finished systemd-vconsole-setup.service. 
Jul 14 22:41:20.995298 systemd-journald[197]: Journal started Jul 14 22:41:20.995363 systemd-journald[197]: Runtime Journal (/run/log/journal/038425e46b4644daab47ebf9a0f33273) is 6.0M, max 48.4M, 42.4M free. Jul 14 22:41:20.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:20.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:20.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:20.997989 systemd[1]: Starting dracut-cmdline-ask.service... Jul 14 22:41:20.998011 kernel: audit: type=1130 audit(1752532880.995:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.000991 systemd-modules-load[198]: Inserted module 'overlay' Jul 14 22:41:21.070173 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 14 22:41:21.070196 systemd[1]: Started systemd-journald.service. Jul 14 22:41:21.070209 kernel: audit: type=1130 audit(1752532881.068:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.079850 systemd-resolved[199]: Positive Trust Anchors: Jul 14 22:41:21.079868 systemd-resolved[199]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:41:21.079899 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 14 22:41:21.082253 systemd-resolved[199]: Defaulting to hostname 'linux'. Jul 14 22:41:21.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.083546 systemd[1]: Started systemd-resolved.service. Jul 14 22:41:21.087590 kernel: audit: type=1130 audit(1752532881.082:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.083931 systemd[1]: Reached target nss-lookup.target. Jul 14 22:41:21.089704 systemd[1]: Finished dracut-cmdline-ask.service. Jul 14 22:41:21.095078 kernel: audit: type=1130 audit(1752532881.089:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.093412 systemd[1]: Starting dracut-cmdline.service... Jul 14 22:41:21.095196 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Jul 14 22:41:21.101161 kernel: audit: type=1130 audit(1752532881.096:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.104712 dracut-cmdline[214]: dracut-dracut-053 Jul 14 22:41:21.106091 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 22:41:21.107433 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d9618a329f89744ce954b0fa1b02ce8164745af7389f9de9c3421ad2087e0dba Jul 14 22:41:21.124904 systemd-modules-load[198]: Inserted module 'br_netfilter' Jul 14 22:41:21.125921 kernel: Bridge firewalling registered Jul 14 22:41:21.144003 kernel: SCSI subsystem initialized Jul 14 22:41:21.193173 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 14 22:41:21.193241 kernel: device-mapper: uevent: version 1.0.3 Jul 14 22:41:21.193252 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 14 22:41:21.194443 kernel: Loading iSCSI transport class v2.0-870. Jul 14 22:41:21.198224 systemd-modules-load[198]: Inserted module 'dm_multipath' Jul 14 22:41:21.199094 systemd[1]: Finished systemd-modules-load.service. 
Jul 14 22:41:21.203921 kernel: audit: type=1130 audit(1752532881.198:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.200212 systemd[1]: Starting systemd-sysctl.service... Jul 14 22:41:21.209339 systemd[1]: Finished systemd-sysctl.service. Jul 14 22:41:21.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.213981 kernel: audit: type=1130 audit(1752532881.209:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.219984 kernel: iscsi: registered transport (tcp) Jul 14 22:41:21.305207 kernel: iscsi: registered transport (qla4xxx) Jul 14 22:41:21.305252 kernel: QLogic iSCSI HBA Driver Jul 14 22:41:21.330511 systemd[1]: Finished dracut-cmdline.service. Jul 14 22:41:21.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.331542 systemd[1]: Starting dracut-pre-udev.service... 
Jul 14 22:41:21.380001 kernel: raid6: avx2x4 gen() 19475 MB/s Jul 14 22:41:21.396992 kernel: raid6: avx2x4 xor() 7267 MB/s Jul 14 22:41:21.413986 kernel: raid6: avx2x2 gen() 29784 MB/s Jul 14 22:41:21.496008 kernel: raid6: avx2x2 xor() 18347 MB/s Jul 14 22:41:21.513010 kernel: raid6: avx2x1 gen() 17815 MB/s Jul 14 22:41:21.570002 kernel: raid6: avx2x1 xor() 12248 MB/s Jul 14 22:41:21.587016 kernel: raid6: sse2x4 gen() 10519 MB/s Jul 14 22:41:21.614012 kernel: raid6: sse2x4 xor() 6788 MB/s Jul 14 22:41:21.631006 kernel: raid6: sse2x2 gen() 14348 MB/s Jul 14 22:41:21.669994 kernel: raid6: sse2x2 xor() 7189 MB/s Jul 14 22:41:21.726991 kernel: raid6: sse2x1 gen() 11630 MB/s Jul 14 22:41:21.782449 kernel: raid6: sse2x1 xor() 7469 MB/s Jul 14 22:41:21.782505 kernel: raid6: using algorithm avx2x2 gen() 29784 MB/s Jul 14 22:41:21.782518 kernel: raid6: .... xor() 18347 MB/s, rmw enabled Jul 14 22:41:21.782529 kernel: raid6: using avx2x2 recovery algorithm Jul 14 22:41:21.794987 kernel: xor: automatically using best checksumming function avx Jul 14 22:41:21.887001 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 14 22:41:21.895373 systemd[1]: Finished dracut-pre-udev.service. Jul 14 22:41:21.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.919000 audit: BPF prog-id=7 op=LOAD Jul 14 22:41:21.919000 audit: BPF prog-id=8 op=LOAD Jul 14 22:41:21.921339 systemd[1]: Starting systemd-udevd.service... Jul 14 22:41:21.933826 systemd-udevd[400]: Using default interface naming scheme 'v252'. Jul 14 22:41:21.937894 systemd[1]: Started systemd-udevd.service. Jul 14 22:41:21.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:21.941070 systemd[1]: Starting dracut-pre-trigger.service... Jul 14 22:41:21.952810 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jul 14 22:41:21.980351 systemd[1]: Finished dracut-pre-trigger.service. Jul 14 22:41:21.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:21.981517 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 22:41:22.017437 systemd[1]: Finished systemd-udev-trigger.service. Jul 14 22:41:22.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:22.040994 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 14 22:41:22.074023 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 14 22:41:22.074039 kernel: GPT:9289727 != 19775487 Jul 14 22:41:22.074048 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 14 22:41:22.074057 kernel: GPT:9289727 != 19775487 Jul 14 22:41:22.074065 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 14 22:41:22.074074 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:41:22.074090 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 22:41:22.075980 kernel: libata version 3.00 loaded. Jul 14 22:41:22.083109 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 14 22:41:22.083259 kernel: AES CTR mode by8 optimization enabled Jul 14 22:41:22.084991 kernel: ahci 0000:00:1f.2: version 3.0 Jul 14 22:41:22.126233 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 14 22:41:22.126258 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 14 22:41:22.126394 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 14 22:41:22.126493 kernel: scsi host0: ahci Jul 14 22:41:22.126658 kernel: scsi host1: ahci Jul 14 22:41:22.126779 kernel: scsi host2: ahci Jul 14 22:41:22.126897 kernel: scsi host3: ahci Jul 14 22:41:22.127022 kernel: scsi host4: ahci Jul 14 22:41:22.127133 kernel: scsi host5: ahci Jul 14 22:41:22.127244 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jul 14 22:41:22.127261 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jul 14 22:41:22.127272 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jul 14 22:41:22.127283 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jul 14 22:41:22.127294 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jul 14 22:41:22.127306 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jul 14 22:41:22.110371 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 14 22:41:22.141170 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 14 22:41:22.154197 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 14 22:41:22.159129 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (448) Jul 14 22:41:22.156109 systemd[1]: Starting disk-uuid.service... Jul 14 22:41:22.162719 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 14 22:41:22.172621 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Jul 14 22:41:22.432931 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 14 22:41:22.433023 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 14 22:41:22.439984 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 14 22:41:22.442907 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 14 22:41:22.443005 kernel: ata3.00: applying bridge limits Jul 14 22:41:22.443016 kernel: ata3.00: configured for UDMA/100 Jul 14 22:41:22.443025 disk-uuid[527]: Primary Header is updated. Jul 14 22:41:22.443025 disk-uuid[527]: Secondary Entries is updated. Jul 14 22:41:22.443025 disk-uuid[527]: Secondary Header is updated. Jul 14 22:41:22.477585 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 14 22:41:22.477787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:41:22.477804 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 14 22:41:22.477813 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 14 22:41:22.477822 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 14 22:41:22.477831 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:41:22.485002 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:41:22.537842 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 14 22:41:22.559763 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 14 22:41:22.559779 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 14 22:41:23.502911 disk-uuid[528]: The operation has completed successfully. Jul 14 22:41:23.504612 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:41:23.526176 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 22:41:23.526322 systemd[1]: Finished disk-uuid.service. Jul 14 22:41:23.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:23.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:23.552635 systemd[1]: Starting verity-setup.service... Jul 14 22:41:23.569998 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 14 22:41:23.595113 systemd[1]: Found device dev-mapper-usr.device. Jul 14 22:41:23.598720 systemd[1]: Mounting sysusr-usr.mount... Jul 14 22:41:23.601351 systemd[1]: Finished verity-setup.service. Jul 14 22:41:23.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:23.691022 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 14 22:41:23.691655 systemd[1]: Mounted sysusr-usr.mount. Jul 14 22:41:23.692663 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 14 22:41:23.693577 systemd[1]: Starting ignition-setup.service... Jul 14 22:41:23.696195 systemd[1]: Starting parse-ip-for-networkd.service... Jul 14 22:41:23.704756 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:41:23.704816 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:41:23.704827 kernel: BTRFS info (device vda6): has skinny extents Jul 14 22:41:23.719235 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 22:41:23.758780 systemd[1]: Finished parse-ip-for-networkd.service. Jul 14 22:41:23.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:23.813000 audit: BPF prog-id=9 op=LOAD Jul 14 22:41:23.815128 systemd[1]: Starting systemd-networkd.service... Jul 14 22:41:23.838557 systemd-networkd[708]: lo: Link UP Jul 14 22:41:23.838570 systemd-networkd[708]: lo: Gained carrier Jul 14 22:41:23.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:23.839056 systemd-networkd[708]: Enumeration completed Jul 14 22:41:23.839138 systemd[1]: Started systemd-networkd.service. Jul 14 22:41:23.839243 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:41:23.926644 systemd[1]: Reached target network.target. Jul 14 22:41:23.927490 systemd-networkd[708]: eth0: Link UP Jul 14 22:41:23.927496 systemd-networkd[708]: eth0: Gained carrier Jul 14 22:41:23.929335 systemd[1]: Starting iscsiuio.service... Jul 14 22:41:23.947320 systemd[1]: Started iscsiuio.service. Jul 14 22:41:23.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:24.007565 systemd-networkd[708]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:41:24.007653 systemd[1]: Starting iscsid.service... Jul 14 22:41:24.013585 iscsid[713]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 14 22:41:24.013585 iscsid[713]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 14 22:41:24.013585 iscsid[713]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 14 22:41:24.013585 iscsid[713]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 14 22:41:24.013585 iscsid[713]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 14 22:41:24.013585 iscsid[713]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 14 22:41:24.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:24.014662 systemd[1]: Started iscsid.service. Jul 14 22:41:24.016462 systemd[1]: Starting dracut-initqueue.service... Jul 14 22:41:24.033848 systemd[1]: Finished dracut-initqueue.service. Jul 14 22:41:24.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:24.035904 systemd[1]: Reached target remote-fs-pre.target. Jul 14 22:41:24.036375 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 22:41:24.037842 systemd[1]: Reached target remote-fs.target. Jul 14 22:41:24.148635 systemd[1]: Starting dracut-pre-mount.service... Jul 14 22:41:24.158146 systemd[1]: Finished dracut-pre-mount.service. Jul 14 22:41:24.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:24.620359 systemd[1]: Finished ignition-setup.service.
Jul 14 22:41:24.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:24.696513 systemd[1]: Starting ignition-fetch-offline.service... Jul 14 22:41:24.768541 ignition[728]: Ignition 2.14.0 Jul 14 22:41:24.768552 ignition[728]: Stage: fetch-offline Jul 14 22:41:24.768598 ignition[728]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:41:24.768606 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:41:24.768701 ignition[728]: parsed url from cmdline: "" Jul 14 22:41:24.768704 ignition[728]: no config URL provided Jul 14 22:41:24.768708 ignition[728]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 22:41:24.768714 ignition[728]: no config at "/usr/lib/ignition/user.ign" Jul 14 22:41:24.768731 ignition[728]: op(1): [started] loading QEMU firmware config module Jul 14 22:41:24.768738 ignition[728]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 14 22:41:24.863673 ignition[728]: op(1): [finished] loading QEMU firmware config module Jul 14 22:41:24.901398 ignition[728]: parsing config with SHA512: 8d30294b3dce183f7f3e55f6a61a82b1de000900426ac21129efecf5dd4e01b474ccf5b78d5000c6e0268860b878c6d00e9d289f048b76e80d7c47a20712a6ed Jul 14 22:41:24.906599 unknown[728]: fetched base config from "system" Jul 14 22:41:24.906609 unknown[728]: fetched user config from "qemu" Jul 14 22:41:24.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:24.907065 ignition[728]: fetch-offline: fetch-offline passed Jul 14 22:41:24.996099 systemd[1]: Finished ignition-fetch-offline.service. 
Jul 14 22:41:24.907108 ignition[728]: Ignition finished successfully Jul 14 22:41:24.997251 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 22:41:24.997995 systemd[1]: Starting ignition-kargs.service... Jul 14 22:41:25.006113 ignition[737]: Ignition 2.14.0 Jul 14 22:41:25.006123 ignition[737]: Stage: kargs Jul 14 22:41:25.006212 ignition[737]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:41:25.006221 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:41:25.008461 systemd[1]: Finished ignition-kargs.service. Jul 14 22:41:25.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:25.007157 ignition[737]: kargs: kargs passed Jul 14 22:41:25.010894 systemd[1]: Starting ignition-disks.service... Jul 14 22:41:25.007192 ignition[737]: Ignition finished successfully Jul 14 22:41:25.016461 ignition[743]: Ignition 2.14.0 Jul 14 22:41:25.016471 ignition[743]: Stage: disks Jul 14 22:41:25.016567 ignition[743]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:41:25.016576 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:41:25.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:25.018329 systemd[1]: Finished ignition-disks.service. Jul 14 22:41:25.017581 ignition[743]: disks: disks passed Jul 14 22:41:25.019456 systemd[1]: Reached target initrd-root-device.target. Jul 14 22:41:25.017615 ignition[743]: Ignition finished successfully Jul 14 22:41:25.021244 systemd[1]: Reached target local-fs-pre.target. Jul 14 22:41:25.022062 systemd[1]: Reached target local-fs.target. 
Jul 14 22:41:25.023527 systemd[1]: Reached target sysinit.target. Jul 14 22:41:25.023921 systemd[1]: Reached target basic.target. Jul 14 22:41:25.024940 systemd[1]: Starting systemd-fsck-root.service... Jul 14 22:41:25.035090 systemd-fsck[751]: ROOT: clean, 619/553520 files, 56023/553472 blocks Jul 14 22:41:25.297219 systemd-networkd[708]: eth0: Gained IPv6LL Jul 14 22:41:25.657637 systemd[1]: Finished systemd-fsck-root.service. Jul 14 22:41:25.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:25.660635 systemd[1]: Mounting sysroot.mount... Jul 14 22:41:25.718117 kernel: kauditd_printk_skb: 21 callbacks suppressed Jul 14 22:41:25.718147 kernel: audit: type=1130 audit(1752532885.658:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:25.813993 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 14 22:41:25.814259 systemd[1]: Mounted sysroot.mount. Jul 14 22:41:25.815946 systemd[1]: Reached target initrd-root-fs.target. Jul 14 22:41:25.818977 systemd[1]: Mounting sysroot-usr.mount... Jul 14 22:41:25.820834 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 14 22:41:25.820883 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 22:41:25.822393 systemd[1]: Reached target ignition-diskful.target. Jul 14 22:41:25.826338 systemd[1]: Mounted sysroot-usr.mount. Jul 14 22:41:25.828981 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 14 22:41:25.865105 systemd[1]: Starting initrd-setup-root.service... 
Jul 14 22:41:25.869363 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 22:41:25.871105 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (757) Jul 14 22:41:25.873319 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Jul 14 22:41:25.875211 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:41:25.875226 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:41:25.875235 kernel: BTRFS info (device vda6): has skinny extents Jul 14 22:41:25.877432 initrd-setup-root[796]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 22:41:25.878419 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 14 22:41:25.929289 initrd-setup-root[804]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 22:41:25.957385 systemd[1]: Finished initrd-setup-root.service. Jul 14 22:41:25.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:25.959810 systemd[1]: Starting ignition-mount.service... Jul 14 22:41:25.963163 kernel: audit: type=1130 audit(1752532885.959:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:25.962734 systemd[1]: Starting sysroot-boot.service... Jul 14 22:41:25.968916 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 14 22:41:25.969030 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 14 22:41:25.982547 systemd[1]: Finished sysroot-boot.service. Jul 14 22:41:25.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:25.987997 kernel: audit: type=1130 audit(1752532885.983:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:26.056833 ignition[826]: INFO : Ignition 2.14.0 Jul 14 22:41:26.056833 ignition[826]: INFO : Stage: mount Jul 14 22:41:26.059905 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:41:26.059905 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:41:26.059905 ignition[826]: INFO : mount: mount passed Jul 14 22:41:26.059905 ignition[826]: INFO : Ignition finished successfully Jul 14 22:41:26.067728 kernel: audit: type=1130 audit(1752532886.058:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:26.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:26.058708 systemd[1]: Finished ignition-mount.service. Jul 14 22:41:26.060621 systemd[1]: Starting ignition-files.service... Jul 14 22:41:26.068049 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 14 22:41:26.078996 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (833) Jul 14 22:41:26.081185 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:41:26.081256 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:41:26.081266 kernel: BTRFS info (device vda6): has skinny extents Jul 14 22:41:26.086353 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
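The audit records interleaved above (type=1130 SERVICE_START, type=1131 SERVICE_STOP) use the kernel audit key=value format: a `type=… audit(epoch:serial):` header, outer fields such as `pid=` and `auid=`, and a quoted `msg='…'` payload. A minimal parser sketch (field names taken from the log itself; this is illustrative, not auditd's own parsing code):

```python
import re

def parse_audit_record(rec: str) -> dict:
    """Split a kernel audit record into its header (type, epoch timestamp,
    serial number), outer key=value fields, and inner msg='...' fields."""
    hdr = re.search(r"type=(\d+) audit\((\d+\.\d+):(\d+)\)", rec)
    if not hdr:
        raise ValueError("no audit header found")
    # Outer fields live before msg=; keys are word chars followed by '='.
    fields = dict(re.findall(r"(\w+)=([^\s']+)", rec.split("msg=")[0]))
    msg = re.search(r"msg='([^']*)'", rec)
    inner = dict(re.findall(r"(\w+)=(\S+)", msg.group(1))) if msg else {}
    return {
        "type": int(hdr.group(1)),
        "timestamp": float(hdr.group(2)),
        "serial": int(hdr.group(3)),
        "fields": fields,
        "msg": inner,
    }

rec = ("audit: type=1130 audit(1752532885.658:32): pid=1 uid=0 auid=4294967295 "
       "ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm=\"systemd\" "
       "res=success'")
r = parse_audit_record(rec)
```

Note that `auid=4294967295` is `(uid_t)-1`, the conventional "unset" login UID for records emitted by PID 1 rather than a logged-in user.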
Jul 14 22:41:26.095876 ignition[852]: INFO : Ignition 2.14.0 Jul 14 22:41:26.095876 ignition[852]: INFO : Stage: files Jul 14 22:41:26.097678 ignition[852]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:41:26.097678 ignition[852]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:41:26.100677 ignition[852]: DEBUG : files: compiled without relabeling support, skipping Jul 14 22:41:26.102334 ignition[852]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 22:41:26.102334 ignition[852]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 22:41:26.105473 ignition[852]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 22:41:26.105473 ignition[852]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 22:41:26.105473 ignition[852]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 22:41:26.105208 unknown[852]: wrote ssh authorized keys file for user: core Jul 14 22:41:26.110909 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 14 22:41:26.110909 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 14 22:41:26.110909 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 22:41:26.110909 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 14 22:41:26.157710 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 14 22:41:26.381767 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 
22:41:26.381767 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 14 22:41:26.418401 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 22:41:26.418401 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:41:26.422148 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:41:26.423990 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:41:26.425971 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:41:26.427813 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:41:26.429682 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:41:26.431624 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:41:26.433540 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:41:26.435405 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:41:26.438036 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:41:26.440687 
ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:41:26.442933 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 14 22:41:42.165096 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 14 22:41:42.598302 ignition[852]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:41:42.598302 ignition[852]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 14 22:41:42.602264 ignition[852]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 14 22:41:42.604624 ignition[852]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 14 22:41:42.604624 ignition[852]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 14 22:41:42.604624 ignition[852]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 14 22:41:42.609810 ignition[852]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:41:42.611906 ignition[852]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:41:42.611906 ignition[852]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 14 22:41:42.611906 ignition[852]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 14 22:41:42.644246 ignition[852]: INFO : files: op(10): op(11): [started] 
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:41:42.646437 ignition[852]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:41:42.648542 ignition[852]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 14 22:41:42.648542 ignition[852]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 14 22:41:42.651551 ignition[852]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 22:41:42.651551 ignition[852]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 22:41:42.651551 ignition[852]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:41:42.673472 ignition[852]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:41:42.691272 ignition[852]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 22:41:42.691272 ignition[852]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:41:42.691272 ignition[852]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:41:42.691272 ignition[852]: INFO : files: files passed Jul 14 22:41:42.691272 ignition[852]: INFO : Ignition finished successfully Jul 14 22:41:42.742600 kernel: audit: type=1130 audit(1752532902.691:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:42.742630 kernel: audit: type=1130 audit(1752532902.730:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.742641 kernel: audit: type=1130 audit(1752532902.734:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.742651 kernel: audit: type=1131 audit(1752532902.734:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.674934 systemd[1]: Finished ignition-files.service. Jul 14 22:41:42.692062 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 14 22:41:42.696601 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
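The `audit(1752532885.658:32)`-style headers in the records above encode seconds since the Unix epoch plus a per-boot serial number. Decoding the epoch part recovers the same wall-clock time the journal prefixes show, which is a quick sanity check when correlating audit records with journal lines:

```python
from datetime import datetime, timezone

# Epoch seconds taken from the audit(1752532885.658:32) header above;
# this should line up with the journal's own "Jul 14 22:41:25" prefix (UTC).
epoch = 1752532885.658
when = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(when.strftime("%b %d %H:%M:%S UTC"))
```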
Jul 14 22:41:42.746668 initrd-setup-root-after-ignition[877]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 14 22:41:42.697151 systemd[1]: Starting ignition-quench.service... Jul 14 22:41:42.748987 initrd-setup-root-after-ignition[879]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:41:42.727377 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 14 22:41:42.730314 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 22:41:42.730374 systemd[1]: Finished ignition-quench.service. Jul 14 22:41:42.762311 kernel: audit: type=1130 audit(1752532902.752:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.762343 kernel: audit: type=1131 audit(1752532902.752:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.734730 systemd[1]: Reached target ignition-complete.target. Jul 14 22:41:42.740900 systemd[1]: Starting initrd-parse-etc.service... Jul 14 22:41:42.752152 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 22:41:42.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:42.752218 systemd[1]: Finished initrd-parse-etc.service. Jul 14 22:41:42.813059 kernel: audit: type=1130 audit(1752532902.808:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.753937 systemd[1]: Reached target initrd-fs.target. Jul 14 22:41:42.761480 systemd[1]: Reached target initrd.target. Jul 14 22:41:42.762317 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 14 22:41:42.762891 systemd[1]: Starting dracut-pre-pivot.service... Jul 14 22:41:42.772790 systemd[1]: Finished dracut-pre-pivot.service. Jul 14 22:41:42.809476 systemd[1]: Starting initrd-cleanup.service... Jul 14 22:41:42.817606 systemd[1]: Stopped target nss-lookup.target. Jul 14 22:41:42.819125 systemd[1]: Stopped target remote-cryptsetup.target. Jul 14 22:41:42.820690 systemd[1]: Stopped target timers.target. Jul 14 22:41:42.822340 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 22:41:42.828677 kernel: audit: type=1131 audit(1752532902.823:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.822433 systemd[1]: Stopped dracut-pre-pivot.service. Jul 14 22:41:42.823861 systemd[1]: Stopped target initrd.target. Jul 14 22:41:42.828716 systemd[1]: Stopped target basic.target. Jul 14 22:41:42.829576 systemd[1]: Stopped target ignition-complete.target. Jul 14 22:41:42.831353 systemd[1]: Stopped target ignition-diskful.target. Jul 14 22:41:42.833087 systemd[1]: Stopped target initrd-root-device.target. 
Jul 14 22:41:42.834791 systemd[1]: Stopped target remote-fs.target. Jul 14 22:41:42.836647 systemd[1]: Stopped target remote-fs-pre.target. Jul 14 22:41:42.838566 systemd[1]: Stopped target sysinit.target. Jul 14 22:41:42.840230 systemd[1]: Stopped target local-fs.target. Jul 14 22:41:42.841767 systemd[1]: Stopped target local-fs-pre.target. Jul 14 22:41:42.850595 kernel: audit: type=1131 audit(1752532902.846:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.843379 systemd[1]: Stopped target swap.target. Jul 14 22:41:42.844754 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 22:41:42.856885 kernel: audit: type=1131 audit(1752532902.852:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.844843 systemd[1]: Stopped dracut-pre-mount.service. Jul 14 22:41:42.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.846328 systemd[1]: Stopped target cryptsetup.target. Jul 14 22:41:42.850631 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 22:41:42.850732 systemd[1]: Stopped dracut-initqueue.service. 
Jul 14 22:41:42.852414 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 22:41:42.852497 systemd[1]: Stopped ignition-fetch-offline.service. Jul 14 22:41:42.857025 systemd[1]: Stopped target paths.target. Jul 14 22:41:42.858405 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 22:41:42.863019 systemd[1]: Stopped systemd-ask-password-console.path. Jul 14 22:41:42.864474 systemd[1]: Stopped target slices.target. Jul 14 22:41:42.866213 systemd[1]: Stopped target sockets.target. Jul 14 22:41:42.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.867808 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 22:41:42.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.867903 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 14 22:41:42.900619 iscsid[713]: iscsid shutting down. Jul 14 22:41:42.895444 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 22:41:42.895559 systemd[1]: Stopped ignition-files.service. Jul 14 22:41:42.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:42.906155 ignition[893]: INFO : Ignition 2.14.0 Jul 14 22:41:42.906155 ignition[893]: INFO : Stage: umount Jul 14 22:41:42.906155 ignition[893]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:41:42.906155 ignition[893]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:41:42.906155 ignition[893]: INFO : umount: umount passed Jul 14 22:41:42.906155 ignition[893]: INFO : Ignition finished successfully Jul 14 22:41:42.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.897816 systemd[1]: Stopping ignition-mount.service... Jul 14 22:41:42.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.898994 systemd[1]: Stopping iscsid.service... Jul 14 22:41:42.901142 systemd[1]: Stopping sysroot-boot.service... Jul 14 22:41:42.902411 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 22:41:42.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 14 22:41:42.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.902590 systemd[1]: Stopped systemd-udev-trigger.service. Jul 14 22:41:42.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.904421 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 22:41:42.904508 systemd[1]: Stopped dracut-pre-trigger.service. Jul 14 22:41:42.907576 systemd[1]: iscsid.service: Deactivated successfully. Jul 14 22:41:42.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.907670 systemd[1]: Stopped iscsid.service. Jul 14 22:41:42.908754 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 22:41:42.908840 systemd[1]: Stopped ignition-mount.service. Jul 14 22:41:42.910582 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 22:41:42.910643 systemd[1]: Closed iscsid.socket. Jul 14 22:41:42.912290 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 22:41:42.912320 systemd[1]: Stopped ignition-disks.service. Jul 14 22:41:42.914016 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 22:41:42.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:42.914047 systemd[1]: Stopped ignition-kargs.service. Jul 14 22:41:42.915559 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 22:41:42.915591 systemd[1]: Stopped ignition-setup.service. Jul 14 22:41:42.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.916459 systemd[1]: Stopping iscsiuio.service... Jul 14 22:41:42.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.918594 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 22:41:42.919022 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 14 22:41:42.919088 systemd[1]: Stopped iscsiuio.service. Jul 14 22:41:42.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.920191 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 22:41:42.920257 systemd[1]: Finished initrd-cleanup.service. Jul 14 22:41:42.921908 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 22:41:43.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.921993 systemd[1]: Stopped sysroot-boot.service. 
Jul 14 22:41:43.005000 audit: BPF prog-id=6 op=UNLOAD Jul 14 22:41:42.924120 systemd[1]: Stopped target network.target. Jul 14 22:41:42.925183 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 22:41:43.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.925210 systemd[1]: Closed iscsiuio.socket. Jul 14 22:41:43.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:43.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.926676 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 22:41:42.926708 systemd[1]: Stopped initrd-setup-root.service. Jul 14 22:41:43.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.980079 systemd[1]: Stopping systemd-networkd.service... Jul 14 22:41:43.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.980636 systemd[1]: Stopping systemd-resolved.service... Jul 14 22:41:43.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:42.986002 systemd-networkd[708]: eth0: DHCPv6 lease lost Jul 14 22:41:43.078000 audit: BPF prog-id=9 op=UNLOAD Jul 14 22:41:43.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.987078 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 22:41:42.987152 systemd[1]: Stopped systemd-networkd.service. Jul 14 22:41:43.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:43.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:42.990301 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 22:41:42.990330 systemd[1]: Closed systemd-networkd.socket. Jul 14 22:41:42.992364 systemd[1]: Stopping network-cleanup.service... Jul 14 22:41:42.993375 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 22:41:42.993415 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 14 22:41:42.994335 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:41:43.090000 audit: BPF prog-id=8 op=UNLOAD Jul 14 22:41:43.090000 audit: BPF prog-id=7 op=UNLOAD Jul 14 22:41:42.994370 systemd[1]: Stopped systemd-sysctl.service. Jul 14 22:41:43.091000 audit: BPF prog-id=5 op=UNLOAD Jul 14 22:41:43.091000 audit: BPF prog-id=4 op=UNLOAD Jul 14 22:41:43.091000 audit: BPF prog-id=3 op=UNLOAD Jul 14 22:41:42.995814 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 22:41:42.995849 systemd[1]: Stopped systemd-modules-load.service. 
Jul 14 22:41:42.996770 systemd[1]: Stopping systemd-udevd.service... Jul 14 22:41:42.998933 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 22:41:42.999301 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 22:41:42.999383 systemd[1]: Stopped systemd-resolved.service. Jul 14 22:41:43.002903 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 22:41:43.003030 systemd[1]: Stopped systemd-udevd.service. Jul 14 22:41:43.006118 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 22:41:43.139622 systemd-journald[197]: Received SIGTERM from PID 1 (n/a). Jul 14 22:41:43.006161 systemd[1]: Closed systemd-udevd-control.socket. Jul 14 22:41:43.007105 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 22:41:43.007135 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 14 22:41:43.008630 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 22:41:43.008663 systemd[1]: Stopped dracut-pre-udev.service. Jul 14 22:41:43.010523 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 22:41:43.010565 systemd[1]: Stopped dracut-cmdline.service. Jul 14 22:41:43.068644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:41:43.068683 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 14 22:41:43.070811 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 14 22:41:43.072434 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 22:41:43.072473 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 14 22:41:43.074380 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 22:41:43.074418 systemd[1]: Stopped kmod-static-nodes.service. Jul 14 22:41:43.076152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:41:43.076195 systemd[1]: Stopped systemd-vconsole-setup.service. 
Jul 14 22:41:43.078028 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 14 22:41:43.078409 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 22:41:43.078483 systemd[1]: Stopped network-cleanup.service. Jul 14 22:41:43.080228 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 22:41:43.080284 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 14 22:41:43.082345 systemd[1]: Reached target initrd-switch-root.target. Jul 14 22:41:43.084799 systemd[1]: Starting initrd-switch-root.service... Jul 14 22:41:43.090115 systemd[1]: Switching root. Jul 14 22:41:43.146890 systemd-journald[197]: Journal stopped Jul 14 22:41:45.779650 kernel: SELinux: Class mctp_socket not defined in policy. Jul 14 22:41:45.779705 kernel: SELinux: Class anon_inode not defined in policy. Jul 14 22:41:45.779723 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 14 22:41:45.779733 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 22:41:45.779743 kernel: SELinux: policy capability open_perms=1 Jul 14 22:41:45.779760 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 22:41:45.779773 kernel: SELinux: policy capability always_check_network=0 Jul 14 22:41:45.779783 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 22:41:45.779793 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 22:41:45.779804 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 22:41:45.779814 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 22:41:45.779827 systemd[1]: Successfully loaded SELinux policy in 41.577ms. Jul 14 22:41:45.779844 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.786ms. 
Jul 14 22:41:45.779855 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 22:41:45.779866 systemd[1]: Detected virtualization kvm. Jul 14 22:41:45.779876 systemd[1]: Detected architecture x86-64. Jul 14 22:41:45.779897 systemd[1]: Detected first boot. Jul 14 22:41:45.779909 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:41:45.779919 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 14 22:41:45.779930 systemd[1]: Populated /etc with preset unit settings. Jul 14 22:41:45.779940 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 22:41:45.779952 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 22:41:45.780005 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:41:45.780017 systemd[1]: Queued start job for default target multi-user.target. Jul 14 22:41:45.780030 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 14 22:41:45.780040 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 14 22:41:45.780051 systemd[1]: Created slice system-addon\x2drun.slice. Jul 14 22:41:45.780061 systemd[1]: Created slice system-getty.slice. Jul 14 22:41:45.780071 systemd[1]: Created slice system-modprobe.slice. Jul 14 22:41:45.780081 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Jul 14 22:41:45.780093 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 14 22:41:45.780105 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 14 22:41:45.780118 systemd[1]: Created slice user.slice. Jul 14 22:41:45.780130 systemd[1]: Started systemd-ask-password-console.path. Jul 14 22:41:45.780140 systemd[1]: Started systemd-ask-password-wall.path. Jul 14 22:41:45.780150 systemd[1]: Set up automount boot.automount. Jul 14 22:41:45.780160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 14 22:41:45.780170 systemd[1]: Reached target integritysetup.target. Jul 14 22:41:45.780180 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 22:41:45.780190 systemd[1]: Reached target remote-fs.target. Jul 14 22:41:45.780202 systemd[1]: Reached target slices.target. Jul 14 22:41:45.780213 systemd[1]: Reached target swap.target. Jul 14 22:41:45.780224 systemd[1]: Reached target torcx.target. Jul 14 22:41:45.780234 systemd[1]: Reached target veritysetup.target. Jul 14 22:41:45.780244 systemd[1]: Listening on systemd-coredump.socket. Jul 14 22:41:45.780253 systemd[1]: Listening on systemd-initctl.socket. Jul 14 22:41:45.780264 systemd[1]: Listening on systemd-journald-audit.socket. Jul 14 22:41:45.780274 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 14 22:41:45.780284 systemd[1]: Listening on systemd-journald.socket. Jul 14 22:41:45.780294 systemd[1]: Listening on systemd-networkd.socket. Jul 14 22:41:45.780304 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 22:41:45.780316 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 22:41:45.780326 systemd[1]: Listening on systemd-userdbd.socket. Jul 14 22:41:45.780337 systemd[1]: Mounting dev-hugepages.mount... Jul 14 22:41:45.780347 systemd[1]: Mounting dev-mqueue.mount... Jul 14 22:41:45.780356 systemd[1]: Mounting media.mount... 
Jul 14 22:41:45.780366 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:41:45.780376 systemd[1]: Mounting sys-kernel-debug.mount... Jul 14 22:41:45.780387 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 14 22:41:45.780397 systemd[1]: Mounting tmp.mount... Jul 14 22:41:45.780408 systemd[1]: Starting flatcar-tmpfiles.service... Jul 14 22:41:45.780419 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:41:45.780429 systemd[1]: Starting kmod-static-nodes.service... Jul 14 22:41:45.780439 systemd[1]: Starting modprobe@configfs.service... Jul 14 22:41:45.780449 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:41:45.780458 systemd[1]: Starting modprobe@drm.service... Jul 14 22:41:45.780469 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:41:45.780479 systemd[1]: Starting modprobe@fuse.service... Jul 14 22:41:45.780488 systemd[1]: Starting modprobe@loop.service... Jul 14 22:41:45.780504 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 22:41:45.780514 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 14 22:41:45.780524 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 14 22:41:45.780534 systemd[1]: Starting systemd-journald.service... Jul 14 22:41:45.780544 kernel: loop: module loaded Jul 14 22:41:45.780554 systemd[1]: Starting systemd-modules-load.service... Jul 14 22:41:45.780564 kernel: fuse: init (API version 7.34) Jul 14 22:41:45.780574 systemd[1]: Starting systemd-network-generator.service... Jul 14 22:41:45.780589 systemd[1]: Starting systemd-remount-fs.service... Jul 14 22:41:45.780606 systemd[1]: Starting systemd-udev-trigger.service... 
Jul 14 22:41:45.780616 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:41:45.780629 systemd-journald[1039]: Journal started Jul 14 22:41:45.780669 systemd-journald[1039]: Runtime Journal (/run/log/journal/038425e46b4644daab47ebf9a0f33273) is 6.0M, max 48.4M, 42.4M free. Jul 14 22:41:45.677000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 22:41:45.677000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 14 22:41:45.776000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 14 22:41:45.776000 audit[1039]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe92e3fb60 a2=4000 a3=7ffe92e3fbfc items=0 ppid=1 pid=1039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:41:45.776000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 14 22:41:45.786991 systemd[1]: Started systemd-journald.service. Jul 14 22:41:45.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.796818 systemd[1]: Mounted dev-hugepages.mount. Jul 14 22:41:45.797833 systemd[1]: Mounted dev-mqueue.mount. Jul 14 22:41:45.798825 systemd[1]: Mounted media.mount. Jul 14 22:41:45.799763 systemd[1]: Mounted sys-kernel-debug.mount. Jul 14 22:41:45.800681 systemd[1]: Mounted sys-kernel-tracing.mount. 
Jul 14 22:41:45.801639 systemd[1]: Mounted tmp.mount. Jul 14 22:41:45.802753 systemd[1]: Finished flatcar-tmpfiles.service. Jul 14 22:41:45.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.804102 systemd[1]: Finished kmod-static-nodes.service. Jul 14 22:41:45.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.805195 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 22:41:45.805391 systemd[1]: Finished modprobe@configfs.service. Jul 14 22:41:45.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.806595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:41:45.806823 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:41:45.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:45.807873 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:41:45.808099 systemd[1]: Finished modprobe@drm.service. Jul 14 22:41:45.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.809170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:41:45.809375 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:41:45.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.810500 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 22:41:45.810693 systemd[1]: Finished modprobe@fuse.service. Jul 14 22:41:45.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.811697 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 14 22:41:45.811932 systemd[1]: Finished modprobe@loop.service. Jul 14 22:41:45.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.813275 systemd[1]: Finished systemd-modules-load.service. Jul 14 22:41:45.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.814593 systemd[1]: Finished systemd-network-generator.service. Jul 14 22:41:45.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.815885 systemd[1]: Finished systemd-remount-fs.service. Jul 14 22:41:45.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.817107 systemd[1]: Reached target network-pre.target. Jul 14 22:41:45.819705 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 14 22:41:45.821768 systemd[1]: Mounting sys-kernel-config.mount... Jul 14 22:41:45.823017 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 22:41:45.824833 systemd[1]: Starting systemd-hwdb-update.service... 
Jul 14 22:41:45.827257 systemd[1]: Starting systemd-journal-flush.service... Jul 14 22:41:45.828533 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:41:45.829733 systemd[1]: Starting systemd-random-seed.service... Jul 14 22:41:45.830896 systemd-journald[1039]: Time spent on flushing to /var/log/journal/038425e46b4644daab47ebf9a0f33273 is 13.593ms for 1114 entries. Jul 14 22:41:45.830896 systemd-journald[1039]: System Journal (/var/log/journal/038425e46b4644daab47ebf9a0f33273) is 8.0M, max 195.6M, 187.6M free. Jul 14 22:41:46.105987 systemd-journald[1039]: Received client request to flush runtime journal. Jul 14 22:41:45.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:45.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:46.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:45.830847 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:41:45.832041 systemd[1]: Starting systemd-sysctl.service... Jul 14 22:41:45.835747 systemd[1]: Starting systemd-sysusers.service... Jul 14 22:41:45.839690 systemd[1]: Finished systemd-udev-trigger.service. Jul 14 22:41:46.106811 udevadm[1075]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 14 22:41:45.840766 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 14 22:41:45.841752 systemd[1]: Mounted sys-kernel-config.mount. Jul 14 22:41:45.864822 systemd[1]: Starting systemd-udev-settle.service... Jul 14 22:41:45.892411 systemd[1]: Finished systemd-sysctl.service. Jul 14 22:41:45.893539 systemd[1]: Finished systemd-sysusers.service. Jul 14 22:41:45.895722 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 14 22:41:45.953411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 14 22:41:45.996680 systemd[1]: Finished systemd-random-seed.service. Jul 14 22:41:46.006789 systemd[1]: Reached target first-boot-complete.target. Jul 14 22:41:46.107171 systemd[1]: Finished systemd-journal-flush.service. Jul 14 22:41:46.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:46.537217 systemd[1]: Finished systemd-hwdb-update.service. Jul 14 22:41:46.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:46.548689 systemd[1]: Starting systemd-udevd.service... Jul 14 22:41:46.566136 systemd-udevd[1086]: Using default interface naming scheme 'v252'. 
Jul 14 22:41:46.578996 systemd[1]: Started systemd-udevd.service. Jul 14 22:41:46.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:46.615360 systemd[1]: Starting systemd-networkd.service... Jul 14 22:41:46.632536 systemd[1]: Found device dev-ttyS0.device. Jul 14 22:41:46.654992 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 14 22:41:46.660627 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 22:41:46.662977 kernel: ACPI: button: Power Button [PWRF] Jul 14 22:41:46.664627 systemd[1]: Starting systemd-userdbd.service... Jul 14 22:41:46.672000 audit[1095]: AVC avc: denied { confidentiality } for pid=1095 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 14 22:41:46.672000 audit[1095]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ab3cfdcd40 a1=338ac a2=7fec42c79bc5 a3=5 items=110 ppid=1086 pid=1095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:41:46.672000 audit: CWD cwd="/" Jul 14 22:41:46.672000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=1 name=(null) inode=10925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=2 name=(null) inode=10925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=3 name=(null) inode=10926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=4 name=(null) inode=10925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=5 name=(null) inode=10927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=6 name=(null) inode=10925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=7 name=(null) inode=10928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=8 name=(null) inode=10928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=9 name=(null) inode=10929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=10 name=(null) inode=10928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=11 name=(null) inode=10930 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 
22:41:46.672000 audit: PATH item=12 name=(null) inode=10928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=13 name=(null) inode=10931 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=14 name=(null) inode=10928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=15 name=(null) inode=10932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=16 name=(null) inode=10928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=17 name=(null) inode=10933 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=18 name=(null) inode=10925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=19 name=(null) inode=10934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=20 name=(null) inode=10934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=21 
name=(null) inode=10935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=22 name=(null) inode=10934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=23 name=(null) inode=10936 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=24 name=(null) inode=10934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=25 name=(null) inode=10937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=26 name=(null) inode=10934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=27 name=(null) inode=10938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=28 name=(null) inode=10934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=29 name=(null) inode=10939 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=30 name=(null) inode=10925 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=31 name=(null) inode=10940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=32 name=(null) inode=10940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=33 name=(null) inode=10941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=34 name=(null) inode=10940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=35 name=(null) inode=10942 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=36 name=(null) inode=10940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=37 name=(null) inode=10943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=38 name=(null) inode=10940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=39 name=(null) inode=10944 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=40 name=(null) inode=10940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=41 name=(null) inode=10945 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=42 name=(null) inode=10925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=43 name=(null) inode=10946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=44 name=(null) inode=10946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=45 name=(null) inode=10947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=46 name=(null) inode=10946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=47 name=(null) inode=10948 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=48 name=(null) inode=10946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=49 name=(null) inode=10949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=50 name=(null) inode=10946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=51 name=(null) inode=10950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=52 name=(null) inode=10946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=53 name=(null) inode=10951 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=55 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=56 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=57 name=(null) inode=10953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=58 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=59 name=(null) inode=10954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=60 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=61 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=62 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=63 name=(null) inode=10956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=64 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=65 name=(null) inode=10957 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=66 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 
22:41:46.672000 audit: PATH item=67 name=(null) inode=10958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=68 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=69 name=(null) inode=10959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=70 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=71 name=(null) inode=10960 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=72 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=73 name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=74 name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=75 name=(null) inode=10962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=76 
name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=77 name=(null) inode=10963 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=78 name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=79 name=(null) inode=10964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=80 name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=81 name=(null) inode=10965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=82 name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=83 name=(null) inode=10966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=84 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=85 name=(null) inode=10967 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=86 name=(null) inode=10967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=87 name=(null) inode=10968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=88 name=(null) inode=10967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=89 name=(null) inode=10969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=90 name=(null) inode=10967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=91 name=(null) inode=10970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=92 name=(null) inode=10967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=93 name=(null) inode=10971 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=94 name=(null) inode=10967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=95 name=(null) inode=10972 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=96 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=97 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=98 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=99 name=(null) inode=10974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=100 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=101 name=(null) inode=10975 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=102 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=103 name=(null) inode=10976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=104 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=105 name=(null) inode=10977 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=106 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=107 name=(null) inode=10978 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PATH item=109 name=(null) inode=10979 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:41:46.672000 audit: PROCTITLE proctitle="(udev-worker)" Jul 14 22:41:46.695978 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 14 22:41:46.704632 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 14 22:41:46.704751 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 14 22:41:46.704874 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 14 22:41:46.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 14 22:41:46.698277 systemd[1]: Started systemd-userdbd.service. Jul 14 22:41:46.744996 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 14 22:41:46.758987 kernel: mousedev: PS/2 mouse device common for all mice Jul 14 22:41:46.770538 systemd-networkd[1105]: lo: Link UP Jul 14 22:41:46.770553 systemd-networkd[1105]: lo: Gained carrier Jul 14 22:41:46.771186 systemd-networkd[1105]: Enumeration completed Jul 14 22:41:46.771345 systemd[1]: Started systemd-networkd.service. Jul 14 22:41:46.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:46.772542 systemd-networkd[1105]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:41:46.773385 systemd-networkd[1105]: eth0: Link UP Jul 14 22:41:46.773400 systemd-networkd[1105]: eth0: Gained carrier Jul 14 22:41:46.787832 kernel: kvm: Nested Virtualization enabled Jul 14 22:41:46.787904 kernel: SVM: kvm: Nested Paging enabled Jul 14 22:41:46.787920 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 14 22:41:46.788524 kernel: SVM: Virtual GIF supported Jul 14 22:41:46.798113 systemd-networkd[1105]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:41:46.804992 kernel: EDAC MC: Ver: 3.0.0 Jul 14 22:41:46.826336 systemd[1]: Finished systemd-udev-settle.service. Jul 14 22:41:46.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:46.832886 systemd[1]: Starting lvm2-activation-early.service... Jul 14 22:41:46.839693 lvm[1123]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jul 14 22:41:46.865842 systemd[1]: Finished lvm2-activation-early.service. Jul 14 22:41:46.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:46.866930 systemd[1]: Reached target cryptsetup.target. Jul 14 22:41:46.869179 systemd[1]: Starting lvm2-activation.service... Jul 14 22:41:46.872398 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:41:46.895763 systemd[1]: Finished lvm2-activation.service. Jul 14 22:41:46.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:46.896723 systemd[1]: Reached target local-fs-pre.target. Jul 14 22:41:46.897696 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 22:41:46.897712 systemd[1]: Reached target local-fs.target. Jul 14 22:41:46.898615 systemd[1]: Reached target machines.target. Jul 14 22:41:46.900819 systemd[1]: Starting ldconfig.service... Jul 14 22:41:46.902019 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 22:41:46.902076 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:41:46.903332 systemd[1]: Starting systemd-boot-update.service... Jul 14 22:41:46.905300 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 14 22:41:46.907776 systemd[1]: Starting systemd-machine-id-commit.service... Jul 14 22:41:46.909731 systemd[1]: Starting systemd-sysext.service... 
Jul 14 22:41:46.910993 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1128 (bootctl) Jul 14 22:41:46.912382 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 14 22:41:46.916261 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 14 22:41:46.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:46.921614 systemd[1]: Unmounting usr-share-oem.mount... Jul 14 22:41:46.926279 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 14 22:41:46.926515 systemd[1]: Unmounted usr-share-oem.mount. Jul 14 22:41:46.936997 kernel: loop0: detected capacity change from 0 to 221472 Jul 14 22:41:46.952485 systemd-fsck[1138]: fsck.fat 4.2 (2021-01-31) Jul 14 22:41:46.952485 systemd-fsck[1138]: /dev/vda1: 791 files, 120745/258078 clusters Jul 14 22:41:46.954158 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 14 22:41:46.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:46.960572 systemd[1]: Mounting boot.mount... Jul 14 22:41:47.021213 ldconfig[1127]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 22:41:47.043704 systemd[1]: Mounted boot.mount. Jul 14 22:41:48.465228 systemd-networkd[1105]: eth0: Gained IPv6LL Jul 14 22:41:48.625977 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 22:41:48.626994 systemd[1]: Finished systemd-boot-update.service. 
Jul 14 22:41:48.631818 kernel: kauditd_printk_skb: 200 callbacks suppressed Jul 14 22:41:48.631872 kernel: audit: type=1130 audit(1752532908.627:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.674978 kernel: loop1: detected capacity change from 0 to 221472 Jul 14 22:41:48.678232 (sd-sysext)[1148]: Using extensions 'kubernetes'. Jul 14 22:41:48.678524 (sd-sysext)[1148]: Merged extensions into '/usr'. Jul 14 22:41:48.750338 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:41:48.751710 systemd[1]: Mounting usr-share-oem.mount... Jul 14 22:41:48.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.759779 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:41:48.760768 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:41:48.762414 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:41:48.764185 systemd[1]: Starting modprobe@loop.service... Jul 14 22:41:48.765075 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 14 22:41:48.765172 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:41:48.765261 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:41:48.767769 systemd[1]: Mounted usr-share-oem.mount. Jul 14 22:41:48.768996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:41:48.769113 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:41:48.770251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:41:48.770361 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:41:48.776540 kernel: audit: type=1130 audit(1752532908.769:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.777432 kernel: audit: type=1131 audit(1752532908.769:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.777931 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:41:48.778091 systemd[1]: Finished modprobe@loop.service. Jul 14 22:41:48.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:48.780988 kernel: audit: type=1130 audit(1752532908.776:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.781025 kernel: audit: type=1131 audit(1752532908.776:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.805769 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:41:48.805875 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:41:48.806710 systemd[1]: Finished systemd-sysext.service. Jul 14 22:41:48.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.808982 kernel: audit: type=1130 audit(1752532908.804:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.809029 kernel: audit: type=1131 audit(1752532908.804:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:48.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.813742 systemd[1]: Starting ensure-sysext.service... Jul 14 22:41:48.815983 kernel: audit: type=1130 audit(1752532908.811:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:48.816884 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 14 22:41:48.821300 systemd[1]: Reloading. Jul 14 22:41:48.825294 systemd-tmpfiles[1162]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 14 22:41:48.825995 systemd-tmpfiles[1162]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 22:41:48.827314 systemd-tmpfiles[1162]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 22:41:48.869174 /usr/lib/systemd/system-generators/torcx-generator[1181]: time="2025-07-14T22:41:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 22:41:48.869197 /usr/lib/systemd/system-generators/torcx-generator[1181]: time="2025-07-14T22:41:48Z" level=info msg="torcx already run" Jul 14 22:41:48.986287 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 22:41:48.986303 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 14 22:41:49.005293 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:41:49.057440 systemd[1]: Finished ldconfig.service. Jul 14 22:41:49.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.060170 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 14 22:41:49.063007 kernel: audit: type=1130 audit(1752532909.058:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.064777 systemd[1]: Starting audit-rules.service... Jul 14 22:41:49.074006 kernel: audit: type=1130 audit(1752532909.063:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.076363 systemd[1]: Starting clean-ca-certificates.service... Jul 14 22:41:49.078335 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 14 22:41:49.081044 systemd[1]: Starting systemd-resolved.service... Jul 14 22:41:49.083556 systemd[1]: Starting systemd-timesyncd.service... Jul 14 22:41:49.085618 systemd[1]: Starting systemd-update-utmp.service... Jul 14 22:41:49.087125 systemd[1]: Finished clean-ca-certificates.service. 
Jul 14 22:41:49.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.092863 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:41:49.093198 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:41:49.094915 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:41:49.111145 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:41:49.113191 systemd[1]: Starting modprobe@loop.service... Jul 14 22:41:49.114804 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 22:41:49.114951 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:41:49.115101 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:41:49.115187 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:41:49.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.116275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 14 22:41:49.116469 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:41:49.117826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:41:49.118007 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:41:49.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.119317 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:41:49.119520 systemd[1]: Finished modprobe@loop.service. Jul 14 22:41:49.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.120733 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:41:49.120852 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:41:49.123183 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:41:49.123449 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:41:49.124735 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 14 22:41:49.126636 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:41:49.128757 systemd[1]: Starting modprobe@loop.service... Jul 14 22:41:49.136253 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 22:41:49.136366 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:41:49.136453 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:41:49.136522 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:41:49.137426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:41:49.137573 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:41:49.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.138732 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:41:49.138877 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:41:49.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:49.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.140125 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:41:49.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.140256 systemd[1]: Finished modprobe@loop.service. Jul 14 22:41:49.141309 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:41:49.141384 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:41:49.143487 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:41:49.143692 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:41:49.144824 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:41:49.146568 systemd[1]: Starting modprobe@drm.service... Jul 14 22:41:49.148196 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:41:49.149885 systemd[1]: Starting modprobe@loop.service... Jul 14 22:41:49.150721 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 14 22:41:49.150832 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:41:49.151867 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 14 22:41:49.152833 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:41:49.152949 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:41:49.154005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:41:49.154129 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:41:49.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.175278 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:41:49.175406 systemd[1]: Finished modprobe@drm.service. Jul 14 22:41:49.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.176674 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 14 22:41:49.176807 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:41:49.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.178125 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:41:49.178332 systemd[1]: Finished modprobe@loop.service. Jul 14 22:41:49.177000 audit[1244]: SYSTEM_BOOT pid=1244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.181513 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:41:49.181662 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:41:49.183384 systemd[1]: Finished ensure-sysext.service. Jul 14 22:41:49.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:41:49.189014 systemd[1]: Finished systemd-update-utmp.service. Jul 14 22:41:49.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.195179 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 14 22:41:49.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:41:49.205272 augenrules[1280]: No rules Jul 14 22:41:49.205298 systemd[1]: Starting systemd-update-done.service... Jul 14 22:41:49.203000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 14 22:41:49.203000 audit[1280]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc9b3da0e0 a2=420 a3=0 items=0 ppid=1232 pid=1280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:41:49.203000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 14 22:41:49.206112 systemd-resolved[1241]: Positive Trust Anchors: Jul 14 22:41:49.206123 systemd-resolved[1241]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:41:49.206148 systemd-resolved[1241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 14 22:41:49.206736 systemd[1]: Finished audit-rules.service. Jul 14 22:41:49.211683 systemd[1]: Finished systemd-update-done.service. Jul 14 22:41:49.213453 systemd[1]: Started systemd-timesyncd.service. Jul 14 22:41:49.214444 systemd-timesyncd[1243]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 22:41:49.214491 systemd-timesyncd[1243]: Initial clock synchronization to Mon 2025-07-14 22:41:49.220876 UTC. Jul 14 22:41:49.214646 systemd[1]: Reached target time-set.target. Jul 14 22:41:49.215254 systemd-resolved[1241]: Defaulting to hostname 'linux'. Jul 14 22:41:49.216799 systemd[1]: Started systemd-resolved.service. Jul 14 22:41:49.217700 systemd[1]: Reached target network.target. Jul 14 22:41:49.218483 systemd[1]: Reached target nss-lookup.target. Jul 14 22:41:49.219340 systemd[1]: Reached target sysinit.target. Jul 14 22:41:49.220244 systemd[1]: Started motdgen.path. Jul 14 22:41:49.221141 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 14 22:41:49.222371 systemd[1]: Started logrotate.timer. Jul 14 22:41:49.241520 systemd[1]: Started mdadm.timer. Jul 14 22:41:49.242265 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 14 22:41:49.243133 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 22:41:49.243154 systemd[1]: Reached target paths.target. 
Jul 14 22:41:49.243897 systemd[1]: Reached target timers.target. Jul 14 22:41:49.244909 systemd[1]: Listening on dbus.socket. Jul 14 22:41:49.246701 systemd[1]: Starting docker.socket... Jul 14 22:41:49.253230 systemd[1]: Listening on sshd.socket. Jul 14 22:41:49.254123 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:41:49.254408 systemd[1]: Listening on docker.socket. Jul 14 22:41:49.255209 systemd[1]: Reached target sockets.target. Jul 14 22:41:49.255993 systemd[1]: Reached target basic.target. Jul 14 22:41:49.256844 systemd[1]: System is tainted: cgroupsv1 Jul 14 22:41:49.256880 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 22:41:49.256897 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 22:41:49.257853 systemd[1]: Starting containerd.service... Jul 14 22:41:49.259560 systemd[1]: Starting dbus.service... Jul 14 22:41:49.261354 systemd[1]: Starting enable-oem-cloudinit.service... Jul 14 22:41:49.263565 systemd[1]: Starting extend-filesystems.service... Jul 14 22:41:49.264700 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 14 22:41:49.266027 systemd[1]: Starting motdgen.service... 
Jul 14 22:41:49.269200 jq[1295]: false Jul 14 22:41:49.291524 dbus-daemon[1293]: [system] SELinux support is enabled Jul 14 22:41:49.292141 extend-filesystems[1296]: Found loop1 Jul 14 22:41:49.293291 extend-filesystems[1296]: Found sr0 Jul 14 22:41:49.293291 extend-filesystems[1296]: Found vda Jul 14 22:41:49.293291 extend-filesystems[1296]: Found vda1 Jul 14 22:41:49.293291 extend-filesystems[1296]: Found vda2 Jul 14 22:41:49.293291 extend-filesystems[1296]: Found vda3 Jul 14 22:41:49.293291 extend-filesystems[1296]: Found usr Jul 14 22:41:49.293291 extend-filesystems[1296]: Found vda4 Jul 14 22:41:49.293291 extend-filesystems[1296]: Found vda6 Jul 14 22:41:49.293291 extend-filesystems[1296]: Found vda7 Jul 14 22:41:49.293291 extend-filesystems[1296]: Found vda9 Jul 14 22:41:49.293291 extend-filesystems[1296]: Checking size of /dev/vda9 Jul 14 22:41:49.323602 systemd[1]: Starting prepare-helm.service... Jul 14 22:41:49.325455 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 14 22:41:49.327367 systemd[1]: Starting sshd-keygen.service... Jul 14 22:41:49.329916 systemd[1]: Starting systemd-logind.service... Jul 14 22:41:49.330717 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:41:49.330803 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 22:41:49.331844 systemd[1]: Starting update-engine.service... Jul 14 22:41:49.348370 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 14 22:41:49.349829 systemd[1]: Started dbus.service. Jul 14 22:41:49.351412 jq[1311]: true Jul 14 22:41:49.353699 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 22:41:49.353932 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Jul 14 22:41:49.355245 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 22:41:49.355272 systemd[1]: Reached target system-config.target. Jul 14 22:41:49.356887 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 22:41:49.356908 systemd[1]: Reached target user-config.target. Jul 14 22:41:49.390984 jq[1315]: true Jul 14 22:41:49.398524 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 22:41:49.398733 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 14 22:41:49.403305 tar[1314]: linux-amd64/helm Jul 14 22:41:49.403382 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 22:41:49.403577 systemd[1]: Finished motdgen.service. Jul 14 22:41:49.415746 env[1320]: time="2025-07-14T22:41:49.415694480Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 14 22:41:49.434293 env[1320]: time="2025-07-14T22:41:49.434244474Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 22:41:49.434405 env[1320]: time="2025-07-14T22:41:49.434365501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:41:49.435561 env[1320]: time="2025-07-14T22:41:49.435526077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.187-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:41:49.435561 env[1320]: time="2025-07-14T22:41:49.435552947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 14 22:41:49.435799 env[1320]: time="2025-07-14T22:41:49.435761989Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:41:49.435799 env[1320]: time="2025-07-14T22:41:49.435790302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 22:41:49.435863 env[1320]: time="2025-07-14T22:41:49.435801223Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 14 22:41:49.435863 env[1320]: time="2025-07-14T22:41:49.435809809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 22:41:49.435910 env[1320]: time="2025-07-14T22:41:49.435864982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:41:49.436075 env[1320]: time="2025-07-14T22:41:49.436056020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:41:49.436214 env[1320]: time="2025-07-14T22:41:49.436195181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:41:49.436214 env[1320]: time="2025-07-14T22:41:49.436210280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 14 22:41:49.436254 env[1320]: time="2025-07-14T22:41:49.436246888Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 14 22:41:49.436279 env[1320]: time="2025-07-14T22:41:49.436257799Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:41:49.486853 systemd-logind[1309]: Watching system buttons on /dev/input/event1 (Power Button) Jul 14 22:41:49.486878 systemd-logind[1309]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 14 22:41:49.488169 systemd-logind[1309]: New seat seat0. Jul 14 22:41:49.493211 systemd[1]: Started systemd-logind.service. Jul 14 22:41:49.503913 update_engine[1310]: I0714 22:41:49.503778 1310 main.cc:92] Flatcar Update Engine starting Jul 14 22:41:49.505641 systemd[1]: Started update-engine.service. Jul 14 22:41:49.506080 update_engine[1310]: I0714 22:41:49.505662 1310 update_check_scheduler.cc:74] Next update check in 4m4s Jul 14 22:41:49.508372 systemd[1]: Started locksmithd.service. Jul 14 22:41:49.545243 extend-filesystems[1296]: Resized partition /dev/vda9 Jul 14 22:41:49.588049 extend-filesystems[1356]: resize2fs 1.46.5 (30-Dec-2021) Jul 14 22:41:49.578028 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 22:41:49.580070 systemd[1]: Finished systemd-machine-id-commit.service. Jul 14 22:41:49.670399 sshd_keygen[1333]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:41:49.690834 locksmithd[1353]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:41:49.696173 systemd[1]: Finished sshd-keygen.service. Jul 14 22:41:49.715068 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 22:41:49.714376 systemd[1]: Starting issuegen.service... Jul 14 22:41:49.720740 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:41:49.720932 systemd[1]: Finished issuegen.service. 
Jul 14 22:41:49.723135 systemd[1]: Starting systemd-user-sessions.service... Jul 14 22:41:49.954395 systemd[1]: Finished systemd-user-sessions.service. Jul 14 22:41:49.956744 systemd[1]: Started getty@tty1.service. Jul 14 22:41:49.958486 systemd[1]: Started serial-getty@ttyS0.service. Jul 14 22:41:49.959468 systemd[1]: Reached target getty.target. Jul 14 22:41:50.066793 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 14 22:41:50.078892 systemd[1]: Reached target network-online.target. Jul 14 22:41:50.158310 systemd[1]: Starting kubelet.service... Jul 14 22:41:50.254260 tar[1314]: linux-amd64/LICENSE Jul 14 22:41:50.254376 tar[1314]: linux-amd64/README.md Jul 14 22:41:50.258584 systemd[1]: Finished prepare-helm.service. Jul 14 22:41:50.498003 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 22:41:51.166771 extend-filesystems[1356]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 22:41:51.166771 extend-filesystems[1356]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 22:41:51.166771 extend-filesystems[1356]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 22:41:51.188369 extend-filesystems[1296]: Resized filesystem in /dev/vda9 Jul 14 22:41:51.167761 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 22:41:51.168088 systemd[1]: Finished extend-filesystems.service. Jul 14 22:41:51.343418 env[1320]: time="2025-07-14T22:41:51.343330551Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 22:41:51.343418 env[1320]: time="2025-07-14T22:41:51.343408346Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 22:41:51.343820 env[1320]: time="2025-07-14T22:41:51.343490451Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jul 14 22:41:51.343820 env[1320]: time="2025-07-14T22:41:51.343548762Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 22:41:51.343820 env[1320]: time="2025-07-14T22:41:51.343570913Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 22:41:51.343820 env[1320]: time="2025-07-14T22:41:51.343588292Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 22:41:51.343820 env[1320]: time="2025-07-14T22:41:51.343604930Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 22:41:51.343820 env[1320]: time="2025-07-14T22:41:51.343624794Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 22:41:51.343820 env[1320]: time="2025-07-14T22:41:51.343642063Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 14 22:41:51.343820 env[1320]: time="2025-07-14T22:41:51.343661647Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 22:41:51.343820 env[1320]: time="2025-07-14T22:41:51.343679437Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 22:41:51.343820 env[1320]: time="2025-07-14T22:41:51.343697408Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 22:41:51.344085 env[1320]: time="2025-07-14T22:41:51.343852818Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 22:41:51.344085 env[1320]: time="2025-07-14T22:41:51.343955289Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 14 22:41:51.344404 env[1320]: time="2025-07-14T22:41:51.344366285Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 22:41:51.344452 env[1320]: time="2025-07-14T22:41:51.344405724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 22:41:51.344452 env[1320]: time="2025-07-14T22:41:51.344423996Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 22:41:51.344514 env[1320]: time="2025-07-14T22:41:51.344479541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 22:41:51.344514 env[1320]: time="2025-07-14T22:41:51.344498563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 22:41:51.344571 env[1320]: time="2025-07-14T22:41:51.344514629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 22:41:51.344571 env[1320]: time="2025-07-14T22:41:51.344528581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 22:41:51.344571 env[1320]: time="2025-07-14T22:41:51.344543775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 22:41:51.344571 env[1320]: time="2025-07-14T22:41:51.344560804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 22:41:51.344673 env[1320]: time="2025-07-14T22:41:51.344575777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 22:41:51.344673 env[1320]: time="2025-07-14T22:41:51.344590360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jul 14 22:41:51.344673 env[1320]: time="2025-07-14T22:41:51.344610285Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 22:41:51.344810 env[1320]: time="2025-07-14T22:41:51.344742824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 22:41:51.344810 env[1320]: time="2025-07-14T22:41:51.344763681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 22:41:51.344810 env[1320]: time="2025-07-14T22:41:51.344780098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 22:41:51.344810 env[1320]: time="2025-07-14T22:41:51.344796404Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 22:41:51.344910 env[1320]: time="2025-07-14T22:41:51.344816480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 14 22:41:51.344910 env[1320]: time="2025-07-14T22:41:51.344832967Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 22:41:51.344910 env[1320]: time="2025-07-14T22:41:51.344855046Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 14 22:41:51.344910 env[1320]: time="2025-07-14T22:41:51.344900640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 22:41:51.345219 env[1320]: time="2025-07-14T22:41:51.345151866Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:41:51.345881 env[1320]: time="2025-07-14T22:41:51.345225391Z" level=info msg="Connect containerd service" Jul 14 22:41:51.345881 env[1320]: time="2025-07-14T22:41:51.345283622Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:41:51.345881 env[1320]: time="2025-07-14T22:41:51.345868561Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:41:51.346151 env[1320]: time="2025-07-14T22:41:51.346087404Z" level=info msg="Start subscribing containerd event" Jul 14 22:41:51.346227 env[1320]: time="2025-07-14T22:41:51.346178079Z" level=info msg="Start recovering state" Jul 14 22:41:51.346227 env[1320]: time="2025-07-14T22:41:51.346183922Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:41:51.346330 env[1320]: time="2025-07-14T22:41:51.346262489Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:41:51.346330 env[1320]: time="2025-07-14T22:41:51.346291494Z" level=info msg="Start event monitor" Jul 14 22:41:51.346330 env[1320]: time="2025-07-14T22:41:51.346321221Z" level=info msg="Start snapshots syncer" Jul 14 22:41:51.348552 env[1320]: time="2025-07-14T22:41:51.346337087Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:41:51.348552 env[1320]: time="2025-07-14T22:41:51.346347811Z" level=info msg="Start streaming server" Jul 14 22:41:51.348552 env[1320]: time="2025-07-14T22:41:51.348459892Z" level=info msg="containerd successfully booted in 1.934254s" Jul 14 22:41:51.346423 systemd[1]: Started containerd.service. 
Jul 14 22:41:51.437763 bash[1348]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:41:51.438614 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 14 22:41:52.002555 systemd[1]: Started kubelet.service. Jul 14 22:41:52.036689 systemd[1]: Reached target multi-user.target. Jul 14 22:41:52.038722 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 14 22:41:52.045282 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 14 22:41:52.045506 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 14 22:41:52.047461 systemd[1]: Startup finished in 23.156s (kernel) + 8.844s (userspace) = 32.001s. Jul 14 22:41:52.433953 kubelet[1403]: E0714 22:41:52.433829 1403 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:41:52.435526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:41:52.435709 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:41:57.692182 systemd[1]: Created slice system-sshd.slice. Jul 14 22:41:57.693303 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:56200.service. Jul 14 22:41:57.732124 sshd[1413]: Accepted publickey for core from 10.0.0.1 port 56200 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:41:57.733591 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:41:57.742132 systemd-logind[1309]: New session 1 of user core. Jul 14 22:41:57.743072 systemd[1]: Created slice user-500.slice. Jul 14 22:41:57.743923 systemd[1]: Starting user-runtime-dir@500.service... Jul 14 22:41:57.752253 systemd[1]: Finished user-runtime-dir@500.service. Jul 14 22:41:57.753461 systemd[1]: Starting user@500.service... 
Jul 14 22:41:57.755885 (systemd)[1418]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:41:57.818537 systemd[1418]: Queued start job for default target default.target. Jul 14 22:41:57.818694 systemd[1418]: Reached target paths.target. Jul 14 22:41:57.818707 systemd[1418]: Reached target sockets.target. Jul 14 22:41:57.818718 systemd[1418]: Reached target timers.target. Jul 14 22:41:57.818728 systemd[1418]: Reached target basic.target. Jul 14 22:41:57.818759 systemd[1418]: Reached target default.target. Jul 14 22:41:57.818778 systemd[1418]: Startup finished in 58ms. Jul 14 22:41:57.818846 systemd[1]: Started user@500.service. Jul 14 22:41:57.819631 systemd[1]: Started session-1.scope. Jul 14 22:41:57.868773 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:56208.service. Jul 14 22:41:57.907517 sshd[1427]: Accepted publickey for core from 10.0.0.1 port 56208 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:41:57.908746 sshd[1427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:41:57.912448 systemd-logind[1309]: New session 2 of user core. Jul 14 22:41:57.913141 systemd[1]: Started session-2.scope. Jul 14 22:41:57.965507 sshd[1427]: pam_unix(sshd:session): session closed for user core Jul 14 22:41:57.967984 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:41272.service. Jul 14 22:41:57.968520 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:56208.service: Deactivated successfully. Jul 14 22:41:57.969480 systemd-logind[1309]: Session 2 logged out. Waiting for processes to exit. Jul 14 22:41:57.969485 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 22:41:57.970408 systemd-logind[1309]: Removed session 2. 
Jul 14 22:41:58.004274 sshd[1433]: Accepted publickey for core from 10.0.0.1 port 41272 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:41:58.005132 sshd[1433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:41:58.007828 systemd-logind[1309]: New session 3 of user core. Jul 14 22:41:58.008493 systemd[1]: Started session-3.scope. Jul 14 22:41:58.057886 sshd[1433]: pam_unix(sshd:session): session closed for user core Jul 14 22:41:58.060344 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:41278.service. Jul 14 22:41:58.060862 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:41272.service: Deactivated successfully. Jul 14 22:41:58.061738 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 22:41:58.061866 systemd-logind[1309]: Session 3 logged out. Waiting for processes to exit. Jul 14 22:41:58.062749 systemd-logind[1309]: Removed session 3. Jul 14 22:41:58.095706 sshd[1440]: Accepted publickey for core from 10.0.0.1 port 41278 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:41:58.096587 sshd[1440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:41:58.099484 systemd-logind[1309]: New session 4 of user core. Jul 14 22:41:58.100224 systemd[1]: Started session-4.scope. Jul 14 22:41:58.153149 sshd[1440]: pam_unix(sshd:session): session closed for user core Jul 14 22:41:58.155743 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:41292.service. Jul 14 22:41:58.156148 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:41278.service: Deactivated successfully. Jul 14 22:41:58.156871 systemd-logind[1309]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:41:58.156937 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:41:58.157683 systemd-logind[1309]: Removed session 4. 
Jul 14 22:41:58.192314 sshd[1447]: Accepted publickey for core from 10.0.0.1 port 41292 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg
Jul 14 22:41:58.193424 sshd[1447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 22:41:58.196276 systemd-logind[1309]: New session 5 of user core.
Jul 14 22:41:58.196928 systemd[1]: Started session-5.scope.
Jul 14 22:41:58.249994 sudo[1452]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 14 22:41:58.250165 sudo[1452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 14 22:41:58.258157 dbus-daemon[1293]: \xd0\u000d\xa1\xbd\xaeU: received setenforce notice (enforcing=2056157856)
Jul 14 22:41:58.259686 sudo[1452]: pam_unix(sudo:session): session closed for user root
Jul 14 22:41:58.260833 sshd[1447]: pam_unix(sshd:session): session closed for user core
Jul 14 22:41:58.262945 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:41300.service.
Jul 14 22:41:58.263325 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:41292.service: Deactivated successfully.
Jul 14 22:41:58.264120 systemd-logind[1309]: Session 5 logged out. Waiting for processes to exit.
Jul 14 22:41:58.264145 systemd[1]: session-5.scope: Deactivated successfully.
Jul 14 22:41:58.265266 systemd-logind[1309]: Removed session 5.
Jul 14 22:41:58.299498 sshd[1454]: Accepted publickey for core from 10.0.0.1 port 41300 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg
Jul 14 22:41:58.300344 sshd[1454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 22:41:58.303163 systemd-logind[1309]: New session 6 of user core.
Jul 14 22:41:58.303813 systemd[1]: Started session-6.scope.
Jul 14 22:41:58.355771 sudo[1461]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 14 22:41:58.355953 sudo[1461]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 14 22:41:58.358249 sudo[1461]: pam_unix(sudo:session): session closed for user root
Jul 14 22:41:58.362444 sudo[1460]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 14 22:41:58.362651 sudo[1460]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 14 22:41:58.370389 systemd[1]: Stopping audit-rules.service...
Jul 14 22:41:58.370000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jul 14 22:41:58.371542 auditctl[1464]: No rules
Jul 14 22:41:58.371801 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 14 22:41:58.372011 systemd[1]: Stopped audit-rules.service.
Jul 14 22:41:58.372290 kernel: kauditd_printk_skb: 28 callbacks suppressed
Jul 14 22:41:58.372330 kernel: audit: type=1305 audit(1752532918.370:160): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jul 14 22:41:58.370000 audit[1464]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffac51a5c0 a2=420 a3=0 items=0 ppid=1 pid=1464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:58.373376 systemd[1]: Starting audit-rules.service...
Jul 14 22:41:58.378380 kernel: audit: type=1300 audit(1752532918.370:160): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffac51a5c0 a2=420 a3=0 items=0 ppid=1 pid=1464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:58.370000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Jul 14 22:41:58.379818 kernel: audit: type=1327 audit(1752532918.370:160): proctitle=2F7362696E2F617564697463746C002D44
Jul 14 22:41:58.379851 kernel: audit: type=1131 audit(1752532918.370:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.388425 augenrules[1482]: No rules
Jul 14 22:41:58.388944 systemd[1]: Finished audit-rules.service.
Jul 14 22:41:58.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.390578 sudo[1460]: pam_unix(sudo:session): session closed for user root
Jul 14 22:41:58.391885 sshd[1454]: pam_unix(sshd:session): session closed for user core
Jul 14 22:41:58.395755 kernel: audit: type=1130 audit(1752532918.388:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.395799 kernel: audit: type=1106 audit(1752532918.388:163): pid=1460 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.388000 audit[1460]: USER_END pid=1460 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.394199 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:41314.service.
Jul 14 22:41:58.388000 audit[1460]: CRED_DISP pid=1460 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.397969 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:41300.service: Deactivated successfully.
Jul 14 22:41:58.398811 systemd[1]: session-6.scope: Deactivated successfully.
Jul 14 22:41:58.399276 systemd-logind[1309]: Session 6 logged out. Waiting for processes to exit.
Jul 14 22:41:58.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:41314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.399982 systemd-logind[1309]: Removed session 6.
Jul 14 22:41:58.402561 kernel: audit: type=1104 audit(1752532918.388:164): pid=1460 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.402604 kernel: audit: type=1130 audit(1752532918.391:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:41314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.402626 kernel: audit: type=1106 audit(1752532918.395:166): pid=1454 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:41:58.395000 audit[1454]: USER_END pid=1454 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:41:58.395000 audit[1454]: CRED_DISP pid=1454 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:41:58.409979 kernel: audit: type=1104 audit(1752532918.395:167): pid=1454 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:41:58.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.12:22-10.0.0.1:41300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.433000 audit[1487]: USER_ACCT pid=1487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:41:58.435006 sshd[1487]: Accepted publickey for core from 10.0.0.1 port 41314 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg
Jul 14 22:41:58.434000 audit[1487]: CRED_ACQ pid=1487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:41:58.434000 audit[1487]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcdb55040 a2=3 a3=0 items=0 ppid=1 pid=1487 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:58.434000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 14 22:41:58.435835 sshd[1487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 22:41:58.439004 systemd-logind[1309]: New session 7 of user core.
Jul 14 22:41:58.439732 systemd[1]: Started session-7.scope.
Jul 14 22:41:58.442000 audit[1487]: USER_START pid=1487 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:41:58.443000 audit[1492]: CRED_ACQ pid=1492 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 14 22:41:58.489000 audit[1493]: USER_ACCT pid=1493 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.489000 audit[1493]: CRED_REFR pid=1493 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.490599 sudo[1493]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 14 22:41:58.490771 sudo[1493]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 14 22:41:58.491000 audit[1493]: USER_START pid=1493 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:58.508948 systemd[1]: Starting docker.service...
Jul 14 22:41:58.543440 env[1505]: time="2025-07-14T22:41:58.543378844Z" level=info msg="Starting up"
Jul 14 22:41:58.544830 env[1505]: time="2025-07-14T22:41:58.544780229Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 14 22:41:58.544830 env[1505]: time="2025-07-14T22:41:58.544808359Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 14 22:41:58.544830 env[1505]: time="2025-07-14T22:41:58.544831257Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 14 22:41:58.544830 env[1505]: time="2025-07-14T22:41:58.544841068Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 14 22:41:58.546493 env[1505]: time="2025-07-14T22:41:58.546472489Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 14 22:41:58.546493 env[1505]: time="2025-07-14T22:41:58.546488263Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 14 22:41:58.546563 env[1505]: time="2025-07-14T22:41:58.546503987Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 14 22:41:58.546563 env[1505]: time="2025-07-14T22:41:58.546511883Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 14 22:41:58.553112 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2087480795-merged.mount: Deactivated successfully.
Jul 14 22:41:59.338913 env[1505]: time="2025-07-14T22:41:59.338857193Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jul 14 22:41:59.338913 env[1505]: time="2025-07-14T22:41:59.338884270Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jul 14 22:41:59.339147 env[1505]: time="2025-07-14T22:41:59.339081545Z" level=info msg="Loading containers: start."
Jul 14 22:41:59.388000 audit[1539]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.388000 audit[1539]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcb61f8d00 a2=0 a3=7ffcb61f8cec items=0 ppid=1505 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.388000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jul 14 22:41:59.389000 audit[1541]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.389000 audit[1541]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcdb8d96d0 a2=0 a3=7ffcdb8d96bc items=0 ppid=1505 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.389000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jul 14 22:41:59.391000 audit[1543]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.391000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd89d726a0 a2=0 a3=7ffd89d7268c items=0 ppid=1505 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.391000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jul 14 22:41:59.392000 audit[1545]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.392000 audit[1545]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd47b72c40 a2=0 a3=7ffd47b72c2c items=0 ppid=1505 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.392000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jul 14 22:41:59.394000 audit[1547]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.394000 audit[1547]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffff96996b0 a2=0 a3=7ffff969969c items=0 ppid=1505 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.394000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Jul 14 22:41:59.412000 audit[1552]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.412000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff22c78cf0 a2=0 a3=7fff22c78cdc items=0 ppid=1505 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.412000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Jul 14 22:41:59.424000 audit[1555]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.424000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcfb96a2c0 a2=0 a3=7ffcfb96a2ac items=0 ppid=1505 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.424000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Jul 14 22:41:59.426000 audit[1557]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.426000 audit[1557]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffddd880260 a2=0 a3=7ffddd88024c items=0 ppid=1505 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.426000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Jul 14 22:41:59.427000 audit[1559]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.427000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffec0fce050 a2=0 a3=7ffec0fce03c items=0 ppid=1505 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.427000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jul 14 22:41:59.436000 audit[1563]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.436000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd712b18c0 a2=0 a3=7ffd712b18ac items=0 ppid=1505 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.436000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Jul 14 22:41:59.443000 audit[1564]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.443000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd5f473370 a2=0 a3=7ffd5f47335c items=0 ppid=1505 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.443000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jul 14 22:41:59.452984 kernel: Initializing XFRM netlink socket
Jul 14 22:41:59.478425 env[1505]: time="2025-07-14T22:41:59.478384667Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 14 22:41:59.491000 audit[1572]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.491000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffca99c3ed0 a2=0 a3=7ffca99c3ebc items=0 ppid=1505 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.491000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Jul 14 22:41:59.501000 audit[1575]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.501000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd20cf80a0 a2=0 a3=7ffd20cf808c items=0 ppid=1505 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.501000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Jul 14 22:41:59.503000 audit[1578]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.503000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffeb18a99c0 a2=0 a3=7ffeb18a99ac items=0 ppid=1505 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.503000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Jul 14 22:41:59.505000 audit[1580]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.505000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc7b354370 a2=0 a3=7ffc7b35435c items=0 ppid=1505 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.505000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Jul 14 22:41:59.506000 audit[1582]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.506000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc6aa016a0 a2=0 a3=7ffc6aa0168c items=0 ppid=1505 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.506000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jul 14 22:41:59.508000 audit[1584]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.508000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd2f8af0d0 a2=0 a3=7ffd2f8af0bc items=0 ppid=1505 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.508000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Jul 14 22:41:59.509000 audit[1586]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.509000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff73fc0cb0 a2=0 a3=7fff73fc0c9c items=0 ppid=1505 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.509000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Jul 14 22:41:59.516000 audit[1589]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.516000 audit[1589]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd24c99d20 a2=0 a3=7ffd24c99d0c items=0 ppid=1505 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.516000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Jul 14 22:41:59.518000 audit[1591]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.518000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff0bbbd010 a2=0 a3=7fff0bbbcffc items=0 ppid=1505 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.518000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jul 14 22:41:59.519000 audit[1593]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.519000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcf17c5880 a2=0 a3=7ffcf17c586c items=0 ppid=1505 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.519000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jul 14 22:41:59.521000 audit[1595]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.521000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffce02d7190 a2=0 a3=7ffce02d717c items=0 ppid=1505 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.521000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Jul 14 22:41:59.523975 systemd-networkd[1105]: docker0: Link UP
Jul 14 22:41:59.531000 audit[1599]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.531000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe714a40c0 a2=0 a3=7ffe714a40ac items=0 ppid=1505 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.531000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Jul 14 22:41:59.536000 audit[1600]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1600 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 14 22:41:59.536000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffceda8ea40 a2=0 a3=7ffceda8ea2c items=0 ppid=1505 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:41:59.536000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jul 14 22:41:59.538986 env[1505]: time="2025-07-14T22:41:59.538946752Z" level=info msg="Loading containers: done."
Jul 14 22:41:59.548651 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4198039404-merged.mount: Deactivated successfully.
Jul 14 22:41:59.553828 env[1505]: time="2025-07-14T22:41:59.553786008Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 14 22:41:59.554133 env[1505]: time="2025-07-14T22:41:59.553950694Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 14 22:41:59.554133 env[1505]: time="2025-07-14T22:41:59.554054953Z" level=info msg="Daemon has completed initialization"
Jul 14 22:41:59.574564 systemd[1]: Started docker.service.
Jul 14 22:41:59.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:41:59.577773 env[1505]: time="2025-07-14T22:41:59.577733750Z" level=info msg="API listen on /run/docker.sock"
Jul 14 22:42:02.686648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 14 22:42:02.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:42:02.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:42:02.686823 systemd[1]: Stopped kubelet.service.
Jul 14 22:42:02.688137 systemd[1]: Starting kubelet.service...
Jul 14 22:42:02.773002 systemd[1]: Started kubelet.service.
Jul 14 22:42:02.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:42:02.810717 kubelet[1642]: E0714 22:42:02.810669 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 22:42:02.813160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 22:42:02.813340 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 22:42:02.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 14 22:42:10.262989 env[1320]: time="2025-07-14T22:42:10.262925270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 14 22:42:13.064346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 14 22:42:13.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:42:13.064520 systemd[1]: Stopped kubelet.service.
Jul 14 22:42:13.065548 kernel: kauditd_printk_skb: 88 callbacks suppressed
Jul 14 22:42:13.065607 kernel: audit: type=1130 audit(1752532933.063:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:42:13.065877 systemd[1]: Starting kubelet.service...
Jul 14 22:42:13.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:42:13.073362 kernel: audit: type=1131 audit(1752532933.063:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:42:13.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:42:13.172732 systemd[1]: Started kubelet.service.
Jul 14 22:42:13.177993 kernel: audit: type=1130 audit(1752532933.172:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 22:42:13.210077 kubelet[1660]: E0714 22:42:13.210011 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 22:42:13.211740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 22:42:13.211878 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 22:42:13.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 14 22:42:13.215984 kernel: audit: type=1131 audit(1752532933.211:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=failed' Jul 14 22:42:16.788200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2225594258.mount: Deactivated successfully. Jul 14 22:42:18.147276 env[1320]: time="2025-07-14T22:42:18.147203813Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:18.149437 env[1320]: time="2025-07-14T22:42:18.149391300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:18.151393 env[1320]: time="2025-07-14T22:42:18.151345884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:18.153103 env[1320]: time="2025-07-14T22:42:18.153064329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:18.153812 env[1320]: time="2025-07-14T22:42:18.153766001Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 14 22:42:18.154412 env[1320]: time="2025-07-14T22:42:18.154379412Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 14 22:42:19.965317 env[1320]: time="2025-07-14T22:42:19.965269305Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:19.967909 env[1320]: time="2025-07-14T22:42:19.967862076Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:19.970230 env[1320]: time="2025-07-14T22:42:19.970196358Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:19.972201 env[1320]: time="2025-07-14T22:42:19.972165112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:19.973088 env[1320]: time="2025-07-14T22:42:19.973046038Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 14 22:42:19.973668 env[1320]: time="2025-07-14T22:42:19.973608287Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 14 22:42:21.737278 env[1320]: time="2025-07-14T22:42:21.737215154Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:21.741797 env[1320]: time="2025-07-14T22:42:21.741763560Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:21.743810 env[1320]: time="2025-07-14T22:42:21.743772145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:21.745357 env[1320]: time="2025-07-14T22:42:21.745324279Z" level=info 
msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:21.746166 env[1320]: time="2025-07-14T22:42:21.746128942Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 14 22:42:21.746594 env[1320]: time="2025-07-14T22:42:21.746564492Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 14 22:42:23.300185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 14 22:42:23.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:23.300388 systemd[1]: Stopped kubelet.service. Jul 14 22:42:23.301826 systemd[1]: Starting kubelet.service... Jul 14 22:42:23.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:23.316976 kernel: audit: type=1130 audit(1752532943.299:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:23.317048 kernel: audit: type=1131 audit(1752532943.299:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:23.384183 systemd[1]: Started kubelet.service. 
Jul 14 22:42:23.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:23.429997 kernel: audit: type=1130 audit(1752532943.383:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:24.029862 kubelet[1678]: E0714 22:42:24.029806 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:42:24.031743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:42:24.031888 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:42:24.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 22:42:24.036997 kernel: audit: type=1131 audit(1752532944.031:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 22:42:25.913051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856852936.mount: Deactivated successfully. 
Jul 14 22:42:26.520059 env[1320]: time="2025-07-14T22:42:26.519987697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:26.524088 env[1320]: time="2025-07-14T22:42:26.524025126Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:26.525438 env[1320]: time="2025-07-14T22:42:26.525391903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:26.526724 env[1320]: time="2025-07-14T22:42:26.526676662Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:26.527142 env[1320]: time="2025-07-14T22:42:26.527112436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 14 22:42:26.527531 env[1320]: time="2025-07-14T22:42:26.527511119Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 22:42:33.050272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359533197.mount: Deactivated successfully. Jul 14 22:42:34.050206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 14 22:42:34.050420 systemd[1]: Stopped kubelet.service. Jul 14 22:42:34.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:42:34.051843 systemd[1]: Starting kubelet.service... Jul 14 22:42:34.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:34.056599 kernel: audit: type=1130 audit(1752532954.049:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:34.056726 kernel: audit: type=1131 audit(1752532954.049:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:34.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:34.136342 systemd[1]: Started kubelet.service. Jul 14 22:42:34.140997 kernel: audit: type=1130 audit(1752532954.135:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:34.170017 kubelet[1695]: E0714 22:42:34.169948 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:42:34.171429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:42:34.171577 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 14 22:42:34.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 22:42:34.176991 kernel: audit: type=1131 audit(1752532954.170:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 22:42:35.082659 update_engine[1310]: I0714 22:42:35.082566 1310 update_attempter.cc:509] Updating boot flags... Jul 14 22:42:35.635089 env[1320]: time="2025-07-14T22:42:35.635016809Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:35.812980 env[1320]: time="2025-07-14T22:42:35.812823682Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:35.838813 env[1320]: time="2025-07-14T22:42:35.838767884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:35.850784 env[1320]: time="2025-07-14T22:42:35.850716628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:35.851609 env[1320]: time="2025-07-14T22:42:35.851572792Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 14 22:42:35.852127 env[1320]: 
time="2025-07-14T22:42:35.852095914Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 22:42:36.941711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319127355.mount: Deactivated successfully. Jul 14 22:42:36.947854 env[1320]: time="2025-07-14T22:42:36.947826000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:36.949579 env[1320]: time="2025-07-14T22:42:36.949560498Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:36.951386 env[1320]: time="2025-07-14T22:42:36.951353325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:36.953168 env[1320]: time="2025-07-14T22:42:36.953133510Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:36.953655 env[1320]: time="2025-07-14T22:42:36.953630120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 14 22:42:36.954141 env[1320]: time="2025-07-14T22:42:36.954092577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 14 22:42:37.726766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3561894843.mount: Deactivated successfully. Jul 14 22:42:44.300154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Jul 14 22:42:44.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:44.300321 systemd[1]: Stopped kubelet.service. Jul 14 22:42:44.307162 systemd[1]: Starting kubelet.service... Jul 14 22:42:44.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:44.444226 kernel: audit: type=1130 audit(1752532964.299:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:44.444283 kernel: audit: type=1131 audit(1752532964.299:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:44.516181 systemd[1]: Started kubelet.service. Jul 14 22:42:44.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:44.555808 kubelet[1726]: E0714 22:42:44.555690 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:42:44.557776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:42:44.557906 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 14 22:42:44.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 22:42:44.566578 kernel: audit: type=1130 audit(1752532964.515:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:44.566617 kernel: audit: type=1131 audit(1752532964.557:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 22:42:46.632927 env[1320]: time="2025-07-14T22:42:46.632872211Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:46.635912 env[1320]: time="2025-07-14T22:42:46.635878409Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:46.638210 env[1320]: time="2025-07-14T22:42:46.638175240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:46.640671 env[1320]: time="2025-07-14T22:42:46.640639426Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:42:46.641476 env[1320]: time="2025-07-14T22:42:46.641437922Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference 
\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 14 22:42:54.800210 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 14 22:42:54.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:54.800397 systemd[1]: Stopped kubelet.service. Jul 14 22:42:54.801696 systemd[1]: Starting kubelet.service... Jul 14 22:42:54.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:54.822225 kernel: audit: type=1130 audit(1752532974.799:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:54.822284 kernel: audit: type=1131 audit(1752532974.799:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:54.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:54.962660 systemd[1]: Started kubelet.service. Jul 14 22:42:54.966990 kernel: audit: type=1130 audit(1752532974.961:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:42:54.997066 kubelet[1748]: E0714 22:42:54.997010 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:42:54.998792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:42:54.998929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:42:54.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 22:42:55.002981 kernel: audit: type=1131 audit(1752532974.998:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 14 22:42:59.262453 systemd[1]: Stopped kubelet.service. Jul 14 22:42:59.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:59.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:59.266161 systemd[1]: Starting kubelet.service... Jul 14 22:42:59.271197 kernel: audit: type=1130 audit(1752532979.261:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:42:59.271243 kernel: audit: type=1131 audit(1752532979.262:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:42:59.283859 systemd[1]: Reloading. Jul 14 22:42:59.351754 /usr/lib/systemd/system-generators/torcx-generator[1799]: time="2025-07-14T22:42:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 22:42:59.352103 /usr/lib/systemd/system-generators/torcx-generator[1799]: time="2025-07-14T22:42:59Z" level=info msg="torcx already run" Jul 14 22:42:59.921805 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 22:42:59.921821 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 22:42:59.940103 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:43:00.004680 systemd[1]: Started kubelet.service. Jul 14 22:43:00.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:43:00.008994 kernel: audit: type=1130 audit(1752532980.003:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:43:00.010275 systemd[1]: Stopping kubelet.service... Jul 14 22:43:00.012218 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:43:00.012450 systemd[1]: Stopped kubelet.service. Jul 14 22:43:00.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:43:00.013894 systemd[1]: Starting kubelet.service... Jul 14 22:43:00.015996 kernel: audit: type=1131 audit(1752532980.011:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:43:00.097561 systemd[1]: Started kubelet.service. Jul 14 22:43:00.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:43:00.103001 kernel: audit: type=1130 audit(1752532980.096:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:43:00.216778 kubelet[1862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:43:00.216778 kubelet[1862]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 22:43:00.216778 kubelet[1862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:43:00.217264 kubelet[1862]: I0714 22:43:00.216767 1862 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:43:00.567452 kubelet[1862]: I0714 22:43:00.567387 1862 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:43:00.567452 kubelet[1862]: I0714 22:43:00.567430 1862 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:43:00.567724 kubelet[1862]: I0714 22:43:00.567700 1862 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:43:00.640059 kubelet[1862]: E0714 22:43:00.639985 1862 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:43:00.641040 kubelet[1862]: I0714 22:43:00.641013 1862 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:43:00.648385 kubelet[1862]: E0714 22:43:00.648357 1862 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:43:00.648470 kubelet[1862]: I0714 22:43:00.648390 1862 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:43:00.653839 kubelet[1862]: I0714 22:43:00.653815 1862 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:43:00.654693 kubelet[1862]: I0714 22:43:00.654667 1862 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:43:00.654821 kubelet[1862]: I0714 22:43:00.654797 1862 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:43:00.655008 kubelet[1862]: I0714 22:43:00.654821 1862 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 14 22:43:00.655134 kubelet[1862]: I0714 22:43:00.655014 1862 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:43:00.655134 kubelet[1862]: I0714 22:43:00.655022 1862 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:43:00.655134 kubelet[1862]: I0714 22:43:00.655128 1862 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:43:00.664093 kubelet[1862]: I0714 22:43:00.664061 1862 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:43:00.664093 kubelet[1862]: I0714 22:43:00.664093 1862 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:43:00.664179 kubelet[1862]: I0714 22:43:00.664124 1862 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:43:00.664179 kubelet[1862]: I0714 22:43:00.664137 1862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:43:00.683939 kubelet[1862]: W0714 22:43:00.683890 1862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:43:00.684048 kubelet[1862]: E0714 22:43:00.683952 1862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:43:00.684862 kubelet[1862]: W0714 22:43:00.684791 1862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:43:00.685040 
kubelet[1862]: E0714 22:43:00.684865 1862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:43:00.687344 kubelet[1862]: I0714 22:43:00.687298 1862 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 22:43:00.687921 kubelet[1862]: I0714 22:43:00.687906 1862 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:43:00.688608 kubelet[1862]: W0714 22:43:00.688594 1862 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 22:43:00.690470 kubelet[1862]: I0714 22:43:00.690438 1862 server.go:1274] "Started kubelet" Jul 14 22:43:00.690772 kubelet[1862]: I0714 22:43:00.690733 1862 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:43:00.690937 kubelet[1862]: I0714 22:43:00.690527 1862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:43:00.691156 kubelet[1862]: I0714 22:43:00.691128 1862 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:43:00.692210 kubelet[1862]: I0714 22:43:00.692191 1862 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:43:00.700984 kernel: audit: type=1400 audit(1752532980.692:231): avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:43:00.701090 kernel: audit: type=1401 audit(1752532980.692:231): op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Jul 14 22:43:00.701109 kernel: audit: type=1300 audit(1752532980.692:231): arch=c000003e syscall=188 success=no exit=-22 a0=c00002ec30 a1=c000db06c0 a2=c00002ec00 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.692000 audit[1862]: AVC avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:43:00.692000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 22:43:00.692000 audit[1862]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00002ec30 a1=c000db06c0 a2=c00002ec00 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.701284 kubelet[1862]: I0714 22:43:00.693410 1862 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 14 22:43:00.701284 kubelet[1862]: I0714 22:43:00.693439 1862 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 14 22:43:00.701284 kubelet[1862]: I0714 22:43:00.693497 1862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:43:00.701284 kubelet[1862]: I0714 22:43:00.698050 1862 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:43:00.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 22:43:00.707987 kernel: audit: type=1327 audit(1752532980.692:231): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 22:43:00.708053 kernel: audit: type=1400 audit(1752532980.692:232): avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:43:00.692000 audit[1862]: AVC avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:43:00.708604 kubelet[1862]: E0714 22:43:00.708391 1862 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:43:00.708931 kubelet[1862]: I0714 22:43:00.708901 1862 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:43:00.709155 kubelet[1862]: E0714 22:43:00.709105 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="200ms" Jul 14 22:43:00.709439 kubelet[1862]: I0714 22:43:00.709422 1862 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:43:00.709600 
kubelet[1862]: I0714 22:43:00.709582 1862 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:43:00.711025 kubelet[1862]: I0714 22:43:00.711011 1862 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:43:00.711346 kernel: audit: type=1401 audit(1752532980.692:232): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 22:43:00.692000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 22:43:00.711568 kubelet[1862]: E0714 22:43:00.710164 1862 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523f75eb3531c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:43:00.690399684 +0000 UTC m=+0.589601523,LastTimestamp:2025-07-14 22:43:00.690399684 +0000 UTC m=+0.589601523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:43:00.711844 kubelet[1862]: E0714 22:43:00.711644 1862 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:43:00.711844 kubelet[1862]: I0714 22:43:00.711704 1862 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:43:00.712186 kubelet[1862]: W0714 22:43:00.712117 1862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:43:00.712246 kubelet[1862]: E0714 22:43:00.712199 1862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:43:00.712299 kubelet[1862]: I0714 22:43:00.712280 1862 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:43:00.692000 audit[1862]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00042b220 a1=c000db06d8 a2=c00002ecc0 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.717697 kernel: audit: type=1300 audit(1752532980.692:232): arch=c000003e syscall=188 success=no exit=-22 a0=c00042b220 a1=c000db06d8 a2=c00002ecc0 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.692000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 22:43:00.696000 audit[1876]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:00.696000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeca294860 a2=0 a3=7ffeca29484c items=0 ppid=1862 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 14 22:43:00.697000 audit[1877]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:00.697000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb67b6ba0 a2=0 a3=7fffb67b6b8c items=0 ppid=1862 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.697000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 14 22:43:00.709000 audit[1879]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:00.709000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffff0529020 a2=0 a3=7ffff052900c items=0 ppid=1862 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 14 22:43:00.711000 audit[1881]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:00.711000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcc11c4f60 a2=0 a3=7ffcc11c4f4c items=0 ppid=1862 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.711000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 14 22:43:00.718000 audit[1884]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:00.718000 audit[1884]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd9cdb4fd0 a2=0 a3=7ffd9cdb4fbc items=0 ppid=1862 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.718000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 14 22:43:00.719726 kubelet[1862]: I0714 22:43:00.719657 1862 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 14 22:43:00.720000 audit[1885]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:00.720000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc8bf4fdb0 a2=0 a3=7ffc8bf4fd9c items=0 ppid=1862 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.720000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 14 22:43:00.721592 kubelet[1862]: I0714 22:43:00.721550 1862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 22:43:00.721592 kubelet[1862]: I0714 22:43:00.721572 1862 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:43:00.721592 kubelet[1862]: I0714 22:43:00.721590 1862 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:43:00.721674 kubelet[1862]: E0714 22:43:00.721629 1862 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:43:00.721000 audit[1887]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:00.721000 audit[1887]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb44b2a70 a2=0 a3=7ffcb44b2a5c items=0 ppid=1862 pid=1887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.721000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 14 22:43:00.722000 audit[1888]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:00.722000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdf3a9870 a2=0 a3=7ffcdf3a985c items=0 ppid=1862 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 14 22:43:00.723000 audit[1889]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:00.723000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc51c79100 a2=0 a3=7ffc51c790ec items=0 ppid=1862 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.723000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 14 22:43:00.724000 audit[1890]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:00.724000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd479c8050 a2=0 a3=7ffd479c803c items=0 ppid=1862 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 14 22:43:00.724000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 14 22:43:00.725000 audit[1891]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1891 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:00.725000 audit[1891]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc787fafc0 a2=0 a3=7ffc787fafac items=0 ppid=1862 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.725000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 14 22:43:00.726000 audit[1892]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:00.726000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff6f277d60 a2=0 a3=7fff6f277d4c items=0 ppid=1862 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:00.726000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 14 22:43:00.733460 kubelet[1862]: W0714 22:43:00.733393 1862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:43:00.733527 kubelet[1862]: E0714 22:43:00.733469 1862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:43:00.734378 kubelet[1862]: I0714 22:43:00.734353 1862 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:43:00.734378 kubelet[1862]: I0714 22:43:00.734368 1862 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:43:00.734378 kubelet[1862]: I0714 22:43:00.734383 1862 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:43:00.808978 kubelet[1862]: E0714 22:43:00.808893 1862 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:43:00.822522 kubelet[1862]: E0714 22:43:00.822426 1862 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:43:00.872181 kubelet[1862]: E0714 22:43:00.872061 1862 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523f75eb3531c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:43:00.690399684 +0000 UTC m=+0.589601523,LastTimestamp:2025-07-14 22:43:00.690399684 +0000 UTC m=+0.589601523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:43:00.909350 kubelet[1862]: E0714 22:43:00.909289 1862 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jul 14 22:43:00.909747 kubelet[1862]: E0714 22:43:00.909700 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" Jul 14 22:43:01.010143 kubelet[1862]: E0714 22:43:01.010079 1862 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:43:01.022666 kubelet[1862]: E0714 22:43:01.022639 1862 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:43:01.111018 kubelet[1862]: E0714 22:43:01.110890 1862 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:43:01.211688 kubelet[1862]: E0714 22:43:01.211604 1862 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:43:01.221873 kubelet[1862]: I0714 22:43:01.221836 1862 policy_none.go:49] "None policy: Start" Jul 14 22:43:01.222771 kubelet[1862]: I0714 22:43:01.222741 1862 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:43:01.222771 kubelet[1862]: I0714 22:43:01.222763 1862 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:43:01.228235 kubelet[1862]: I0714 22:43:01.228212 1862 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:43:01.227000 audit[1862]: AVC avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:43:01.227000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 22:43:01.227000 audit[1862]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000cd10b0 
a1=c000cd2510 a2=c000cd1080 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:01.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 22:43:01.228528 kubelet[1862]: I0714 22:43:01.228266 1862 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 14 22:43:01.228528 kubelet[1862]: I0714 22:43:01.228343 1862 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:43:01.228528 kubelet[1862]: I0714 22:43:01.228353 1862 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:43:01.228629 kubelet[1862]: I0714 22:43:01.228613 1862 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:43:01.229671 kubelet[1862]: E0714 22:43:01.229645 1862 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 22:43:01.310313 kubelet[1862]: E0714 22:43:01.310254 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" Jul 14 22:43:01.329949 kubelet[1862]: I0714 22:43:01.329878 1862 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:43:01.331267 kubelet[1862]: E0714 22:43:01.331216 1862 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 22:43:01.517902 kubelet[1862]: I0714 22:43:01.517373 1862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:43:01.517902 kubelet[1862]: I0714 22:43:01.517438 1862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:43:01.517902 kubelet[1862]: I0714 22:43:01.517471 1862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8861c4811c8364d34a961e9db0c049c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8861c4811c8364d34a961e9db0c049c2\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:43:01.517902 kubelet[1862]: I0714 22:43:01.517493 1862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8861c4811c8364d34a961e9db0c049c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8861c4811c8364d34a961e9db0c049c2\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:43:01.517902 kubelet[1862]: I0714 22:43:01.517514 1862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:43:01.518161 kubelet[1862]: I0714 22:43:01.517534 1862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:43:01.518161 kubelet[1862]: I0714 22:43:01.517550 1862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:43:01.518161 kubelet[1862]: I0714 22:43:01.517569 1862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8861c4811c8364d34a961e9db0c049c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8861c4811c8364d34a961e9db0c049c2\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:43:01.518161 kubelet[1862]: I0714 22:43:01.517591 1862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:43:01.532541 kubelet[1862]: I0714 22:43:01.532506 1862 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:43:01.532831 kubelet[1862]: E0714 22:43:01.532804 
1862 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 22:43:01.727601 kubelet[1862]: E0714 22:43:01.727537 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:01.728289 env[1320]: time="2025-07-14T22:43:01.728232299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8861c4811c8364d34a961e9db0c049c2,Namespace:kube-system,Attempt:0,}" Jul 14 22:43:01.729286 kubelet[1862]: E0714 22:43:01.729266 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:01.729776 env[1320]: time="2025-07-14T22:43:01.729552688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 14 22:43:01.731859 kubelet[1862]: E0714 22:43:01.731840 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:01.732215 env[1320]: time="2025-07-14T22:43:01.732175091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 14 22:43:01.831599 kubelet[1862]: W0714 22:43:01.831523 1862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:43:01.831772 kubelet[1862]: E0714 22:43:01.831603 1862 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:43:01.934633 kubelet[1862]: I0714 22:43:01.934585 1862 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:43:01.935046 kubelet[1862]: E0714 22:43:01.935003 1862 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 22:43:01.992504 kubelet[1862]: W0714 22:43:01.992422 1862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:43:01.992504 kubelet[1862]: E0714 22:43:01.992496 1862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:43:02.111634 kubelet[1862]: E0714 22:43:02.111520 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="1.6s" Jul 14 22:43:02.148591 kubelet[1862]: W0714 22:43:02.148514 1862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.12:6443: connect: connection refused Jul 14 22:43:02.148591 kubelet[1862]: E0714 22:43:02.148582 1862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:43:02.308337 kubelet[1862]: W0714 22:43:02.308263 1862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:43:02.308337 kubelet[1862]: E0714 22:43:02.308334 1862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:43:02.643313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603601398.mount: Deactivated successfully. 
Jul 14 22:43:02.649376 kubelet[1862]: E0714 22:43:02.649335 1862 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:43:02.652760 env[1320]: time="2025-07-14T22:43:02.652720379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.655358 env[1320]: time="2025-07-14T22:43:02.655312558Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.657105 env[1320]: time="2025-07-14T22:43:02.657061786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.658044 env[1320]: time="2025-07-14T22:43:02.657994822Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.660487 env[1320]: time="2025-07-14T22:43:02.660453974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.661623 env[1320]: time="2025-07-14T22:43:02.661544426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.662757 env[1320]: 
time="2025-07-14T22:43:02.662721175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.664219 env[1320]: time="2025-07-14T22:43:02.664173487Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.665573 env[1320]: time="2025-07-14T22:43:02.665540183Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.666903 env[1320]: time="2025-07-14T22:43:02.666875179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.668193 env[1320]: time="2025-07-14T22:43:02.668165948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.673670 env[1320]: time="2025-07-14T22:43:02.673624698Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:02.724008 env[1320]: time="2025-07-14T22:43:02.723897537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:43:02.724182 env[1320]: time="2025-07-14T22:43:02.724012770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:43:02.724182 env[1320]: time="2025-07-14T22:43:02.724050674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:43:02.724454 env[1320]: time="2025-07-14T22:43:02.724374441Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10a138ecb7694a25ffd37cf1534c193fd3baea1df4681196fa28ca577d92d350 pid=1905 runtime=io.containerd.runc.v2 Jul 14 22:43:02.729556 env[1320]: time="2025-07-14T22:43:02.729444137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:43:02.729850 env[1320]: time="2025-07-14T22:43:02.729523060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:43:02.729850 env[1320]: time="2025-07-14T22:43:02.729568248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:43:02.729943 env[1320]: time="2025-07-14T22:43:02.729866495Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/deaa8a0180f6eb1a5314fd8c54cb0269675276abff849aaa3982777f75b99b91 pid=1918 runtime=io.containerd.runc.v2 Jul 14 22:43:02.736935 kubelet[1862]: I0714 22:43:02.736908 1862 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:43:02.737308 kubelet[1862]: E0714 22:43:02.737285 1862 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 22:43:02.773994 env[1320]: time="2025-07-14T22:43:02.772451177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:43:02.773994 env[1320]: time="2025-07-14T22:43:02.772596418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:43:02.773994 env[1320]: time="2025-07-14T22:43:02.772636615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:43:02.773994 env[1320]: time="2025-07-14T22:43:02.772784281Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da5f24e0ae1c67221cc85c0482686ff8858e9fef7b6805d9286a643cbb5f134d pid=1959 runtime=io.containerd.runc.v2 Jul 14 22:43:02.957632 env[1320]: time="2025-07-14T22:43:02.956650621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"da5f24e0ae1c67221cc85c0482686ff8858e9fef7b6805d9286a643cbb5f134d\"" Jul 14 22:43:02.958872 kubelet[1862]: E0714 22:43:02.958834 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:02.961787 env[1320]: time="2025-07-14T22:43:02.961738533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"deaa8a0180f6eb1a5314fd8c54cb0269675276abff849aaa3982777f75b99b91\"" Jul 14 22:43:02.962774 kubelet[1862]: E0714 22:43:02.962722 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:02.962922 env[1320]: time="2025-07-14T22:43:02.962882248Z" level=info msg="CreateContainer within sandbox 
\"da5f24e0ae1c67221cc85c0482686ff8858e9fef7b6805d9286a643cbb5f134d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 22:43:02.963589 env[1320]: time="2025-07-14T22:43:02.963449757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8861c4811c8364d34a961e9db0c049c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"10a138ecb7694a25ffd37cf1534c193fd3baea1df4681196fa28ca577d92d350\"" Jul 14 22:43:02.965153 kubelet[1862]: E0714 22:43:02.965124 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:02.966350 env[1320]: time="2025-07-14T22:43:02.966310807Z" level=info msg="CreateContainer within sandbox \"deaa8a0180f6eb1a5314fd8c54cb0269675276abff849aaa3982777f75b99b91\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 22:43:02.966888 env[1320]: time="2025-07-14T22:43:02.966848227Z" level=info msg="CreateContainer within sandbox \"10a138ecb7694a25ffd37cf1534c193fd3baea1df4681196fa28ca577d92d350\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 22:43:03.172661 env[1320]: time="2025-07-14T22:43:03.172576204Z" level=info msg="CreateContainer within sandbox \"da5f24e0ae1c67221cc85c0482686ff8858e9fef7b6805d9286a643cbb5f134d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f69c920ca660fac158b859c9411c100812850b7b6b7dd13dfa9cfe7a918373cc\"" Jul 14 22:43:03.173401 env[1320]: time="2025-07-14T22:43:03.173367025Z" level=info msg="StartContainer for \"f69c920ca660fac158b859c9411c100812850b7b6b7dd13dfa9cfe7a918373cc\"" Jul 14 22:43:03.179850 env[1320]: time="2025-07-14T22:43:03.179810060Z" level=info msg="CreateContainer within sandbox \"10a138ecb7694a25ffd37cf1534c193fd3baea1df4681196fa28ca577d92d350\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"fbbd878e1c6bc12e9b9164fd93f84dce9031a620b9ac13c9cc10469df0a290dc\"" Jul 14 22:43:03.180346 env[1320]: time="2025-07-14T22:43:03.180310018Z" level=info msg="StartContainer for \"fbbd878e1c6bc12e9b9164fd93f84dce9031a620b9ac13c9cc10469df0a290dc\"" Jul 14 22:43:03.182259 env[1320]: time="2025-07-14T22:43:03.182230683Z" level=info msg="CreateContainer within sandbox \"deaa8a0180f6eb1a5314fd8c54cb0269675276abff849aaa3982777f75b99b91\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c5c43fc24567d1b51b8fa7282f4284b396256bb7945c573b68482671e91297ce\"" Jul 14 22:43:03.182659 env[1320]: time="2025-07-14T22:43:03.182639134Z" level=info msg="StartContainer for \"c5c43fc24567d1b51b8fa7282f4284b396256bb7945c573b68482671e91297ce\"" Jul 14 22:43:03.252087 env[1320]: time="2025-07-14T22:43:03.251932117Z" level=info msg="StartContainer for \"f69c920ca660fac158b859c9411c100812850b7b6b7dd13dfa9cfe7a918373cc\" returns successfully" Jul 14 22:43:03.267713 env[1320]: time="2025-07-14T22:43:03.267661229Z" level=info msg="StartContainer for \"c5c43fc24567d1b51b8fa7282f4284b396256bb7945c573b68482671e91297ce\" returns successfully" Jul 14 22:43:03.283531 env[1320]: time="2025-07-14T22:43:03.283459344Z" level=info msg="StartContainer for \"fbbd878e1c6bc12e9b9164fd93f84dce9031a620b9ac13c9cc10469df0a290dc\" returns successfully" Jul 14 22:43:03.740845 kubelet[1862]: E0714 22:43:03.740809 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:03.742174 kubelet[1862]: E0714 22:43:03.742150 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:03.743366 kubelet[1862]: E0714 22:43:03.743335 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:04.339080 kubelet[1862]: I0714 22:43:04.339041 1862 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:43:04.634475 kubelet[1862]: E0714 22:43:04.634360 1862 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 22:43:04.714257 kubelet[1862]: I0714 22:43:04.714213 1862 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:43:04.714257 kubelet[1862]: E0714 22:43:04.714259 1862 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 22:43:04.741053 kubelet[1862]: E0714 22:43:04.741010 1862 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:43:04.745595 kubelet[1862]: E0714 22:43:04.745555 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:05.667012 kubelet[1862]: I0714 22:43:05.666971 1862 apiserver.go:52] "Watching apiserver" Jul 14 22:43:05.712566 kubelet[1862]: I0714 22:43:05.712530 1862 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:43:05.939327 kubelet[1862]: E0714 22:43:05.939211 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:06.747403 kubelet[1862]: E0714 22:43:06.747362 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:07.206263 systemd[1]: Reloading. 
Jul 14 22:43:07.278218 /usr/lib/systemd/system-generators/torcx-generator[2159]: time="2025-07-14T22:43:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 22:43:07.278246 /usr/lib/systemd/system-generators/torcx-generator[2159]: time="2025-07-14T22:43:07Z" level=info msg="torcx already run" Jul 14 22:43:07.350381 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 22:43:07.350399 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 22:43:07.373053 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:43:07.445784 systemd[1]: Stopping kubelet.service... Jul 14 22:43:07.465366 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:43:07.465877 systemd[1]: Stopped kubelet.service. Jul 14 22:43:07.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:43:07.466744 kernel: kauditd_printk_skb: 41 callbacks suppressed Jul 14 22:43:07.466792 kernel: audit: type=1131 audit(1752532987.464:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:43:07.467500 systemd[1]: Starting kubelet.service... 
Jul 14 22:43:07.555493 systemd[1]: Started kubelet.service. Jul 14 22:43:07.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:43:07.559995 kernel: audit: type=1130 audit(1752532987.554:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:43:07.590038 kubelet[2215]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:43:07.590038 kubelet[2215]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 22:43:07.590038 kubelet[2215]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 22:43:07.590422 kubelet[2215]: I0714 22:43:07.590071 2215 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:43:07.596982 kubelet[2215]: I0714 22:43:07.596930 2215 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:43:07.596982 kubelet[2215]: I0714 22:43:07.596975 2215 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:43:07.597250 kubelet[2215]: I0714 22:43:07.597227 2215 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:43:07.598974 kubelet[2215]: I0714 22:43:07.598547 2215 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 22:43:07.600224 kubelet[2215]: I0714 22:43:07.600195 2215 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:43:07.603077 kubelet[2215]: E0714 22:43:07.603052 2215 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:43:07.603077 kubelet[2215]: I0714 22:43:07.603075 2215 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:43:07.607813 kubelet[2215]: I0714 22:43:07.607798 2215 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:43:07.608364 kubelet[2215]: I0714 22:43:07.608350 2215 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:43:07.608540 kubelet[2215]: I0714 22:43:07.608517 2215 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:43:07.608766 kubelet[2215]: I0714 22:43:07.608607 2215 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 14 22:43:07.608904 kubelet[2215]: I0714 22:43:07.608889 2215 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:43:07.608986 kubelet[2215]: I0714 22:43:07.608973 2215 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:43:07.609084 kubelet[2215]: I0714 22:43:07.609072 2215 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:43:07.609240 kubelet[2215]: I0714 22:43:07.609230 2215 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:43:07.609888 kubelet[2215]: I0714 22:43:07.609876 2215 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:43:07.610039 kubelet[2215]: I0714 22:43:07.610027 2215 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:43:07.611004 kubelet[2215]: I0714 22:43:07.610985 2215 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:43:07.611661 kubelet[2215]: I0714 22:43:07.611638 2215 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 22:43:07.612019 kubelet[2215]: I0714 22:43:07.611999 2215 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:43:07.612386 kubelet[2215]: I0714 22:43:07.612367 2215 server.go:1274] "Started kubelet" Jul 14 22:43:07.614016 kubelet[2215]: I0714 22:43:07.613991 2215 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 14 22:43:07.614076 kubelet[2215]: I0714 22:43:07.614026 2215 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 14 22:43:07.614076 kubelet[2215]: I0714 
22:43:07.614051 2215 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:43:07.612000 audit[2215]: AVC avc: denied { mac_admin } for pid=2215 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:43:07.612000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 22:43:07.619063 kernel: audit: type=1400 audit(1752532987.612:248): avc: denied { mac_admin } for pid=2215 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:43:07.621023 kernel: audit: type=1401 audit(1752532987.612:248): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 22:43:07.621081 kernel: audit: type=1300 audit(1752532987.612:248): arch=c000003e syscall=188 success=no exit=-22 a0=c00004bce0 a1=c000733020 a2=c00004bcb0 a3=25 items=0 ppid=1 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:07.612000 audit[2215]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00004bce0 a1=c000733020 a2=c00004bcb0 a3=25 items=0 ppid=1 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:07.621329 kubelet[2215]: I0714 22:43:07.621125 2215 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:43:07.621989 kubelet[2215]: I0714 22:43:07.621970 2215 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:43:07.622738 kubelet[2215]: I0714 22:43:07.622706 2215 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:43:07.622888 kubelet[2215]: I0714 22:43:07.622867 
2215 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:43:07.623192 kubelet[2215]: I0714 22:43:07.623172 2215 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:43:07.612000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 22:43:07.627760 kubelet[2215]: E0714 22:43:07.626907 2215 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:43:07.627760 kubelet[2215]: I0714 22:43:07.627010 2215 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:43:07.627760 kubelet[2215]: I0714 22:43:07.627197 2215 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:43:07.627760 kubelet[2215]: I0714 22:43:07.627346 2215 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:43:07.628292 kubelet[2215]: E0714 22:43:07.628273 2215 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:43:07.628538 kernel: audit: type=1327 audit(1752532987.612:248): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 22:43:07.631854 kernel: audit: type=1400 audit(1752532987.613:249): avc: denied { mac_admin } for pid=2215 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 14 22:43:07.613000 audit[2215]: AVC avc: denied { mac_admin } for pid=2215 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:43:07.631985 kubelet[2215]: I0714 22:43:07.631532 2215 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:43:07.631985 kubelet[2215]: I0714 22:43:07.631541 2215 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:43:07.631985 kubelet[2215]: I0714 22:43:07.631625 2215 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:43:07.638492 kernel: audit: type=1401 audit(1752532987.613:249): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 22:43:07.638521 kernel: audit: type=1300 audit(1752532987.613:249): arch=c000003e syscall=188 success=no exit=-22 a0=c0004acda0 a1=c000733038 a2=c00004bd70 a3=25 items=0 ppid=1 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:07.613000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 22:43:07.613000 audit[2215]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0004acda0 a1=c000733038 a2=c00004bd70 a3=25 items=0 ppid=1 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:07.642895 kernel: audit: type=1327 audit(1752532987.613:249): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 22:43:07.613000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 22:43:07.646222 kubelet[2215]: I0714 22:43:07.646064 2215 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:43:07.648130 kubelet[2215]: I0714 22:43:07.648110 2215 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 22:43:07.648217 kubelet[2215]: I0714 22:43:07.648202 2215 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:43:07.648296 kubelet[2215]: I0714 22:43:07.648283 2215 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:43:07.650071 kubelet[2215]: E0714 22:43:07.650045 2215 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:43:07.678099 kubelet[2215]: I0714 22:43:07.678074 2215 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:43:07.678234 kubelet[2215]: I0714 22:43:07.678218 2215 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:43:07.678312 kubelet[2215]: I0714 22:43:07.678299 2215 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:43:07.678520 kubelet[2215]: I0714 22:43:07.678506 2215 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 22:43:07.678616 kubelet[2215]: I0714 22:43:07.678575 2215 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 22:43:07.678684 kubelet[2215]: I0714 
22:43:07.678671 2215 policy_none.go:49] "None policy: Start" Jul 14 22:43:07.679212 kubelet[2215]: I0714 22:43:07.679179 2215 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:43:07.679276 kubelet[2215]: I0714 22:43:07.679214 2215 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:43:07.679415 kubelet[2215]: I0714 22:43:07.679387 2215 state_mem.go:75] "Updated machine memory state" Jul 14 22:43:07.684000 audit[2215]: AVC avc: denied { mac_admin } for pid=2215 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:43:07.684000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 14 22:43:07.684000 audit[2215]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00142f1d0 a1=c000d87458 a2=c00142f1a0 a3=25 items=0 ppid=1 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:07.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 14 22:43:07.686410 kubelet[2215]: I0714 22:43:07.685902 2215 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:43:07.686410 kubelet[2215]: I0714 22:43:07.686009 2215 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 14 22:43:07.686410 kubelet[2215]: I0714 22:43:07.686253 2215 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:43:07.686410 kubelet[2215]: I0714 22:43:07.686267 2215 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:43:07.686669 kubelet[2215]: I0714 22:43:07.686643 2215 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:43:07.758285 kubelet[2215]: E0714 22:43:07.758154 2215 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 22:43:07.790606 kubelet[2215]: I0714 22:43:07.790555 2215 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:43:07.796757 kubelet[2215]: I0714 22:43:07.796581 2215 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 22:43:07.796757 kubelet[2215]: I0714 22:43:07.796683 2215 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:43:07.827654 kubelet[2215]: I0714 22:43:07.827601 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8861c4811c8364d34a961e9db0c049c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8861c4811c8364d34a961e9db0c049c2\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:43:07.827654 kubelet[2215]: I0714 22:43:07.827644 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8861c4811c8364d34a961e9db0c049c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8861c4811c8364d34a961e9db0c049c2\") " 
pod="kube-system/kube-apiserver-localhost" Jul 14 22:43:07.827868 kubelet[2215]: I0714 22:43:07.827667 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:43:07.827868 kubelet[2215]: I0714 22:43:07.827691 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:43:07.827868 kubelet[2215]: I0714 22:43:07.827728 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8861c4811c8364d34a961e9db0c049c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8861c4811c8364d34a961e9db0c049c2\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:43:07.827868 kubelet[2215]: I0714 22:43:07.827772 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:43:07.827868 kubelet[2215]: I0714 22:43:07.827803 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 14 22:43:07.828077 kubelet[2215]: I0714 22:43:07.827827 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:43:07.828077 kubelet[2215]: I0714 22:43:07.827845 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:43:08.057371 kubelet[2215]: E0714 22:43:08.057341 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:08.058328 kubelet[2215]: E0714 22:43:08.058309 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:08.058455 kubelet[2215]: E0714 22:43:08.058443 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:08.612280 kubelet[2215]: I0714 22:43:08.612237 2215 apiserver.go:52] "Watching apiserver" Jul 14 22:43:08.628033 kubelet[2215]: I0714 22:43:08.627999 2215 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:43:08.664731 kubelet[2215]: E0714 22:43:08.664661 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:08.664864 kubelet[2215]: E0714 22:43:08.664836 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:08.802088 kubelet[2215]: E0714 22:43:08.802015 2215 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 22:43:08.802299 kubelet[2215]: E0714 22:43:08.802268 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:09.665442 kubelet[2215]: E0714 22:43:09.665397 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:09.956607 kubelet[2215]: I0714 22:43:09.956474 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.9564341039999995 podStartE2EDuration="4.956434104s" podCreationTimestamp="2025-07-14 22:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:43:09.132210029 +0000 UTC m=+1.573552172" watchObservedRunningTime="2025-07-14 22:43:09.956434104 +0000 UTC m=+2.397776247" Jul 14 22:43:10.539526 kubelet[2215]: I0714 22:43:10.539465 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.5394442699999997 podStartE2EDuration="3.53944427s" podCreationTimestamp="2025-07-14 22:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-07-14 22:43:09.957381808 +0000 UTC m=+2.398723951" watchObservedRunningTime="2025-07-14 22:43:10.53944427 +0000 UTC m=+2.980786413" Jul 14 22:43:10.667067 kubelet[2215]: E0714 22:43:10.667039 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:10.879394 kubelet[2215]: I0714 22:43:10.879327 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.879307107 podStartE2EDuration="3.879307107s" podCreationTimestamp="2025-07-14 22:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:43:10.540044325 +0000 UTC m=+2.981386458" watchObservedRunningTime="2025-07-14 22:43:10.879307107 +0000 UTC m=+3.320649250" Jul 14 22:43:11.097298 kubelet[2215]: E0714 22:43:11.097265 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:12.394103 kubelet[2215]: I0714 22:43:12.394057 2215 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 22:43:12.394515 env[1320]: time="2025-07-14T22:43:12.394376218Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 14 22:43:12.394744 kubelet[2215]: I0714 22:43:12.394606 2215 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 22:43:12.661747 kubelet[2215]: I0714 22:43:12.661603 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45hx\" (UniqueName: \"kubernetes.io/projected/f291271e-db5c-40f1-958b-5866c3db0d0e-kube-api-access-c45hx\") pod \"kube-proxy-6s9sl\" (UID: \"f291271e-db5c-40f1-958b-5866c3db0d0e\") " pod="kube-system/kube-proxy-6s9sl" Jul 14 22:43:12.661747 kubelet[2215]: I0714 22:43:12.661643 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f291271e-db5c-40f1-958b-5866c3db0d0e-xtables-lock\") pod \"kube-proxy-6s9sl\" (UID: \"f291271e-db5c-40f1-958b-5866c3db0d0e\") " pod="kube-system/kube-proxy-6s9sl" Jul 14 22:43:12.661747 kubelet[2215]: I0714 22:43:12.661662 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f291271e-db5c-40f1-958b-5866c3db0d0e-kube-proxy\") pod \"kube-proxy-6s9sl\" (UID: \"f291271e-db5c-40f1-958b-5866c3db0d0e\") " pod="kube-system/kube-proxy-6s9sl" Jul 14 22:43:12.661747 kubelet[2215]: I0714 22:43:12.661710 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f291271e-db5c-40f1-958b-5866c3db0d0e-lib-modules\") pod \"kube-proxy-6s9sl\" (UID: \"f291271e-db5c-40f1-958b-5866c3db0d0e\") " pod="kube-system/kube-proxy-6s9sl" Jul 14 22:43:12.793473 kubelet[2215]: I0714 22:43:12.793438 2215 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 14 22:43:12.923622 kubelet[2215]: E0714 22:43:12.923496 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:12.924204 env[1320]: time="2025-07-14T22:43:12.924159126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6s9sl,Uid:f291271e-db5c-40f1-958b-5866c3db0d0e,Namespace:kube-system,Attempt:0,}" Jul 14 22:43:13.440924 env[1320]: time="2025-07-14T22:43:13.440852124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:43:13.441260 env[1320]: time="2025-07-14T22:43:13.440897531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:43:13.441260 env[1320]: time="2025-07-14T22:43:13.440909825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:43:13.441260 env[1320]: time="2025-07-14T22:43:13.441044614Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a90b823acb95e3a0285eb40fd7edb9e8466554d82794c043dd63c6c6bb1fa356 pid=2272 runtime=io.containerd.runc.v2 Jul 14 22:43:13.470374 env[1320]: time="2025-07-14T22:43:13.470340783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6s9sl,Uid:f291271e-db5c-40f1-958b-5866c3db0d0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a90b823acb95e3a0285eb40fd7edb9e8466554d82794c043dd63c6c6bb1fa356\"" Jul 14 22:43:13.471025 kubelet[2215]: E0714 22:43:13.471001 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:13.473448 env[1320]: time="2025-07-14T22:43:13.473415529Z" level=info msg="CreateContainer within sandbox \"a90b823acb95e3a0285eb40fd7edb9e8466554d82794c043dd63c6c6bb1fa356\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 22:43:13.701701 env[1320]: time="2025-07-14T22:43:13.701304352Z" level=info msg="CreateContainer within sandbox \"a90b823acb95e3a0285eb40fd7edb9e8466554d82794c043dd63c6c6bb1fa356\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"16648df66646495b8e57df583bd9b782bf2254b581b115e1279c1359f60a459c\"" Jul 14 22:43:13.702146 env[1320]: time="2025-07-14T22:43:13.702110099Z" level=info msg="StartContainer for \"16648df66646495b8e57df583bd9b782bf2254b581b115e1279c1359f60a459c\"" Jul 14 22:43:13.750333 env[1320]: time="2025-07-14T22:43:13.750284709Z" level=info msg="StartContainer for \"16648df66646495b8e57df583bd9b782bf2254b581b115e1279c1359f60a459c\" returns successfully" Jul 14 22:43:13.768113 kubelet[2215]: I0714 22:43:13.768060 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c1b69413-61fd-4f25-9e63-2ab1340fbfe1-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-p4sgd\" (UID: \"c1b69413-61fd-4f25-9e63-2ab1340fbfe1\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-p4sgd" Jul 14 22:43:13.768113 kubelet[2215]: I0714 22:43:13.768107 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnvs4\" (UniqueName: \"kubernetes.io/projected/c1b69413-61fd-4f25-9e63-2ab1340fbfe1-kube-api-access-bnvs4\") pod \"tigera-operator-5bf8dfcb4-p4sgd\" (UID: \"c1b69413-61fd-4f25-9e63-2ab1340fbfe1\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-p4sgd" Jul 14 22:43:13.859011 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 14 22:43:13.859123 kernel: audit: type=1325 audit(1752532993.854:251): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.859148 kernel: audit: type=1300 audit(1752532993.854:251): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffffa31e850 a2=0 a3=7ffffa31e83c items=0 ppid=2323 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.854000 audit[2373]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.854000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffffa31e850 a2=0 a3=7ffffa31e83c items=0 ppid=2323 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.866515 kernel: audit: type=1327 audit(1752532993.854:251): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 14 22:43:13.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 14 22:43:13.868980 kernel: audit: type=1325 audit(1752532993.854:252): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:13.854000 audit[2374]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:13.854000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1da4e040 a2=0 a3=7ffd1da4e02c items=0 ppid=2323 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.873537 kernel: audit: type=1300 audit(1752532993.854:252): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1da4e040 a2=0 a3=7ffd1da4e02c items=0 ppid=2323 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.873669 kernel: audit: type=1327 audit(1752532993.854:252): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 14 22:43:13.854000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 14 22:43:13.859000 audit[2376]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:13.878393 kernel: audit: type=1325 audit(1752532993.859:253): table=nat:40 
family=10 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:13.878476 kernel: audit: type=1300 audit(1752532993.859:253): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0567d280 a2=0 a3=7ffe0567d26c items=0 ppid=2323 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.859000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0567d280 a2=0 a3=7ffe0567d26c items=0 ppid=2323 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.859000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 14 22:43:13.884928 kernel: audit: type=1327 audit(1752532993.859:253): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 14 22:43:13.860000 audit[2377]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.860000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd48778c40 a2=0 a3=7ffd48778c2c items=0 ppid=2323 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 14 22:43:13.861000 audit[2379]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2379 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.861000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc040febb0 a2=0 a3=7ffc040feb9c items=0 ppid=2323 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.861000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 14 22:43:13.861000 audit[2378]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:13.861000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff25600bd0 a2=0 a3=7fff25600bbc items=0 ppid=2323 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 14 22:43:13.887994 kernel: audit: type=1325 audit(1752532993.860:254): table=nat:41 family=2 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.956000 audit[2381]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.956000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffec2b1f210 a2=0 a3=7ffec2b1f1fc items=0 ppid=2323 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.956000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 14 22:43:13.959000 audit[2383]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2383 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.959000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffbf8ffba0 a2=0 a3=7fffbf8ffb8c items=0 ppid=2323 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.959000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 14 22:43:13.962000 audit[2386]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2386 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.962000 audit[2386]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd7780d120 a2=0 a3=7ffd7780d10c items=0 ppid=2323 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 14 22:43:13.963000 audit[2387]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.963000 audit[2387]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef8b2f540 a2=0 a3=7ffef8b2f52c items=0 ppid=2323 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.963000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 14 22:43:13.965000 audit[2389]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.965000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeca9f12a0 a2=0 a3=7ffeca9f128c items=0 ppid=2323 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.965000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 14 22:43:13.966000 audit[2390]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.966000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4da94220 a2=0 a3=7ffd4da9420c items=0 ppid=2323 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.966000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 14 22:43:13.968000 audit[2392]: NETFILTER_CFG 
table=filter:50 family=2 entries=1 op=nft_register_rule pid=2392 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.968000 audit[2392]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff391a17a0 a2=0 a3=7fff391a178c items=0 ppid=2323 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 14 22:43:13.972000 audit[2395]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.972000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd6bd55ee0 a2=0 a3=7ffd6bd55ecc items=0 ppid=2323 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.972000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 14 22:43:13.972000 audit[2396]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.972000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa9ecc5a0 a2=0 a3=7fffa9ecc58c items=0 ppid=2323 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.972000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 14 22:43:13.975000 audit[2398]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.975000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda74b2d00 a2=0 a3=7ffda74b2cec items=0 ppid=2323 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.975000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 14 22:43:13.976000 audit[2399]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.976000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7b6fbc90 a2=0 a3=7ffc7b6fbc7c items=0 ppid=2323 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 14 22:43:13.978000 audit[2401]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.978000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 
a0=3 a1=7ffe471e9ec0 a2=0 a3=7ffe471e9eac items=0 ppid=2323 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.978000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 14 22:43:13.981000 audit[2404]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.981000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc7f9575d0 a2=0 a3=7ffc7f9575bc items=0 ppid=2323 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.981000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 14 22:43:13.983000 audit[2407]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.983000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd3d247c70 a2=0 a3=7ffd3d247c5c items=0 ppid=2323 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.983000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 14 22:43:13.984000 audit[2408]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.984000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcb1dc0ca0 a2=0 a3=7ffcb1dc0c8c items=0 ppid=2323 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.984000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 14 22:43:13.986000 audit[2410]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.986000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff672adb00 a2=0 a3=7fff672adaec items=0 ppid=2323 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 14 22:43:13.989000 audit[2413]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.989000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffee23a4140 a2=0 a3=7ffee23a412c 
items=0 ppid=2323 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 14 22:43:13.990000 audit[2414]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.990000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed3efe830 a2=0 a3=7ffed3efe81c items=0 ppid=2323 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.990000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 14 22:43:13.992000 audit[2416]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 14 22:43:13.992000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc9b260790 a2=0 a3=7ffc9b26077c items=0 ppid=2323 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:13.992000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 14 22:43:14.009004 env[1320]: 
time="2025-07-14T22:43:14.008943122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-p4sgd,Uid:c1b69413-61fd-4f25-9e63-2ab1340fbfe1,Namespace:tigera-operator,Attempt:0,}" Jul 14 22:43:14.013000 audit[2422]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:14.013000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcdb60af70 a2=0 a3=7ffcdb60af5c items=0 ppid=2323 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:14.025000 audit[2422]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:14.025000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcdb60af70 a2=0 a3=7ffcdb60af5c items=0 ppid=2323 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:14.027000 audit[2435]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.027000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdea565dc0 a2=0 a3=7ffdea565dac items=0 ppid=2323 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.027000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 14 22:43:14.033000 audit[2438]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.033000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdddfef4c0 a2=0 a3=7ffdddfef4ac items=0 ppid=2323 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 14 22:43:14.043000 audit[2445]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.043000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe7fc36d80 a2=0 a3=7ffe7fc36d6c items=0 ppid=2323 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.043000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 14 22:43:14.044000 audit[2448]: NETFILTER_CFG 
table=filter:68 family=10 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.044000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb882f230 a2=0 a3=7ffdb882f21c items=0 ppid=2323 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.044000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 14 22:43:14.046000 audit[2450]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.046000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffcae1b780 a2=0 a3=7fffcae1b76c items=0 ppid=2323 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.046000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 14 22:43:14.047000 audit[2451]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.047000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed143ff60 a2=0 a3=7ffed143ff4c items=0 ppid=2323 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.047000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 14 22:43:14.049000 audit[2453]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.049000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd760583c0 a2=0 a3=7ffd760583ac items=0 ppid=2323 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.049000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 14 22:43:14.052000 audit[2456]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.052000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd973eca60 a2=0 a3=7ffd973eca4c items=0 ppid=2323 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 14 22:43:14.054000 audit[2457]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.054000 audit[2457]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd9d188f0 a2=0 a3=7fffd9d188dc items=0 ppid=2323 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.054000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 14 22:43:14.056000 audit[2459]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.056000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2afa9970 a2=0 a3=7ffd2afa995c items=0 ppid=2323 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.056000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 14 22:43:14.057000 audit[2460]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.057000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0709a0d0 a2=0 a3=7fff0709a0bc items=0 ppid=2323 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.057000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 14 22:43:14.060000 audit[2462]: NETFILTER_CFG 
table=filter:76 family=10 entries=1 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.060000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6224fbe0 a2=0 a3=7ffe6224fbcc items=0 ppid=2323 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.060000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 14 22:43:14.064000 audit[2465]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.064000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd9562fad0 a2=0 a3=7ffd9562fabc items=0 ppid=2323 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.064000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 14 22:43:14.066734 env[1320]: time="2025-07-14T22:43:14.066618934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:43:14.066734 env[1320]: time="2025-07-14T22:43:14.066678197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:43:14.066933 env[1320]: time="2025-07-14T22:43:14.066704417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:43:14.067055 env[1320]: time="2025-07-14T22:43:14.066925431Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd1c563b6cab70f9e2ee2158d304d1ceb5e2f3fdce529043901119c9502eca80 pid=2436 runtime=io.containerd.runc.v2 Jul 14 22:43:14.068000 audit[2468]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.068000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffebf0c6b40 a2=0 a3=7ffebf0c6b2c items=0 ppid=2323 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 14 22:43:14.070000 audit[2471]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.070000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc9242dda0 a2=0 a3=7ffc9242dd8c items=0 ppid=2323 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.070000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 14 22:43:14.073000 audit[2473]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.073000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc68c4d860 a2=0 a3=7ffc68c4d84c items=0 ppid=2323 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.073000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 14 22:43:14.077000 audit[2482]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.077000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffea9c5f3c0 a2=0 a3=7ffea9c5f3ac items=0 ppid=2323 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.077000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 14 22:43:14.080000 audit[2483]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.080000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc55b2db80 a2=0 a3=7ffc55b2db6c items=0 ppid=2323 
pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.080000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 14 22:43:14.083000 audit[2485]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.083000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffebf816250 a2=0 a3=7ffebf81623c items=0 ppid=2323 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 14 22:43:14.084000 audit[2487]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.084000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3f200890 a2=0 a3=7fff3f20087c items=0 ppid=2323 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.084000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 14 22:43:14.086000 audit[2491]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2491 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jul 14 22:43:14.086000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe1ed975a0 a2=0 a3=7ffe1ed9758c items=0 ppid=2323 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.086000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 14 22:43:14.092000 audit[2494]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 14 22:43:14.092000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffe33b6f80 a2=0 a3=7fffe33b6f6c items=0 ppid=2323 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.092000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 14 22:43:14.097000 audit[2496]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 14 22:43:14.097000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffe1e27f20 a2=0 a3=7fffe1e27f0c items=0 ppid=2323 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.097000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:14.098000 audit[2496]: NETFILTER_CFG table=nat:88 
family=10 entries=7 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 14 22:43:14.098000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffe1e27f20 a2=0 a3=7fffe1e27f0c items=0 ppid=2323 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:14.098000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:14.195982 env[1320]: time="2025-07-14T22:43:14.195916607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-p4sgd,Uid:c1b69413-61fd-4f25-9e63-2ab1340fbfe1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fd1c563b6cab70f9e2ee2158d304d1ceb5e2f3fdce529043901119c9502eca80\"" Jul 14 22:43:14.197739 env[1320]: time="2025-07-14T22:43:14.197711432Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 14 22:43:14.323402 kubelet[2215]: E0714 22:43:14.323358 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:14.675300 kubelet[2215]: E0714 22:43:14.675162 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:14.676291 kubelet[2215]: E0714 22:43:14.676263 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:14.685072 kubelet[2215]: I0714 22:43:14.684888 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6s9sl" 
podStartSLOduration=2.684866402 podStartE2EDuration="2.684866402s" podCreationTimestamp="2025-07-14 22:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:43:14.684463348 +0000 UTC m=+7.125805491" watchObservedRunningTime="2025-07-14 22:43:14.684866402 +0000 UTC m=+7.126208545" Jul 14 22:43:15.371594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3306614876.mount: Deactivated successfully. Jul 14 22:43:15.677786 kubelet[2215]: E0714 22:43:15.677653 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:17.688895 env[1320]: time="2025-07-14T22:43:17.688840517Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:17.691785 env[1320]: time="2025-07-14T22:43:17.691722891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:17.696756 env[1320]: time="2025-07-14T22:43:17.696677225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:17.699875 env[1320]: time="2025-07-14T22:43:17.699804587Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:17.700603 env[1320]: time="2025-07-14T22:43:17.700568571Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference 
\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 14 22:43:17.703579 env[1320]: time="2025-07-14T22:43:17.703112767Z" level=info msg="CreateContainer within sandbox \"fd1c563b6cab70f9e2ee2158d304d1ceb5e2f3fdce529043901119c9502eca80\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 14 22:43:18.091979 env[1320]: time="2025-07-14T22:43:18.091870902Z" level=info msg="CreateContainer within sandbox \"fd1c563b6cab70f9e2ee2158d304d1ceb5e2f3fdce529043901119c9502eca80\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7f4caa1abd8d027c96e0f4f673ab85e877fc59302c8dd6be59342411d04c8c55\"" Jul 14 22:43:18.093236 env[1320]: time="2025-07-14T22:43:18.092549592Z" level=info msg="StartContainer for \"7f4caa1abd8d027c96e0f4f673ab85e877fc59302c8dd6be59342411d04c8c55\"" Jul 14 22:43:18.137250 env[1320]: time="2025-07-14T22:43:18.137196020Z" level=info msg="StartContainer for \"7f4caa1abd8d027c96e0f4f673ab85e877fc59302c8dd6be59342411d04c8c55\" returns successfully" Jul 14 22:43:18.690897 kubelet[2215]: I0714 22:43:18.690810 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-p4sgd" podStartSLOduration=2.186466122 podStartE2EDuration="5.690794923s" podCreationTimestamp="2025-07-14 22:43:13 +0000 UTC" firstStartedPulling="2025-07-14 22:43:14.197272349 +0000 UTC m=+6.638614492" lastFinishedPulling="2025-07-14 22:43:17.70160116 +0000 UTC m=+10.142943293" observedRunningTime="2025-07-14 22:43:18.690535876 +0000 UTC m=+11.131878020" watchObservedRunningTime="2025-07-14 22:43:18.690794923 +0000 UTC m=+11.132137066" Jul 14 22:43:20.643300 kubelet[2215]: E0714 22:43:20.643243 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:21.101662 kubelet[2215]: E0714 22:43:21.101623 2215 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:30.331328 sudo[1493]: pam_unix(sudo:session): session closed for user root Jul 14 22:43:30.330000 audit[1493]: USER_END pid=1493 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 14 22:43:30.332819 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 14 22:43:30.332878 kernel: audit: type=1106 audit(1752533010.330:302): pid=1493 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 14 22:43:30.330000 audit[1493]: CRED_DISP pid=1493 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 14 22:43:30.340554 sshd[1487]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:30.341883 kernel: audit: type=1104 audit(1752533010.330:303): pid=1493 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 14 22:43:30.340000 audit[1487]: USER_END pid=1487 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:43:30.345356 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:41314.service: Deactivated successfully. 
Jul 14 22:43:30.340000 audit[1487]: CRED_DISP pid=1487 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:43:30.348723 systemd-logind[1309]: Session 7 logged out. Waiting for processes to exit. Jul 14 22:43:30.348829 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 22:43:30.349643 systemd-logind[1309]: Removed session 7. Jul 14 22:43:30.354121 kernel: audit: type=1106 audit(1752533010.340:304): pid=1487 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:43:30.354222 kernel: audit: type=1104 audit(1752533010.340:305): pid=1487 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:43:30.354249 kernel: audit: type=1131 audit(1752533010.344:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:41314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:43:30.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:41314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:43:30.707000 audit[2607]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:30.707000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffb8b0ef50 a2=0 a3=7fffb8b0ef3c items=0 ppid=2323 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:30.716520 kernel: audit: type=1325 audit(1752533010.707:307): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:30.716586 kernel: audit: type=1300 audit(1752533010.707:307): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffb8b0ef50 a2=0 a3=7fffb8b0ef3c items=0 ppid=2323 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:30.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:30.718939 kernel: audit: type=1327 audit(1752533010.707:307): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:30.719000 audit[2607]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:30.719000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb8b0ef50 a2=0 a3=0 items=0 ppid=2323 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 14 22:43:30.728104 kernel: audit: type=1325 audit(1752533010.719:308): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:30.728173 kernel: audit: type=1300 audit(1752533010.719:308): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb8b0ef50 a2=0 a3=0 items=0 ppid=2323 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:30.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:30.742000 audit[2609]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:30.742000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffdac0e760 a2=0 a3=7fffdac0e74c items=0 ppid=2323 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:30.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:30.747000 audit[2609]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:30.747000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffdac0e760 a2=0 a3=0 items=0 ppid=2323 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:30.747000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:33.445000 audit[2611]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:33.445000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd268177d0 a2=0 a3=7ffd268177bc items=0 ppid=2323 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:33.445000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:33.460000 audit[2611]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:33.460000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd268177d0 a2=0 a3=0 items=0 ppid=2323 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:33.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:33.476000 audit[2613]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=2613 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:33.476000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff41efbf30 a2=0 a3=7fff41efbf1c items=0 ppid=2323 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:33.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:33.482000 audit[2613]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2613 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:33.482000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff41efbf30 a2=0 a3=0 items=0 ppid=2323 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:33.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:33.600342 kubelet[2215]: I0714 22:43:33.600275 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhd46\" (UniqueName: \"kubernetes.io/projected/1c06113e-5cc5-4395-a917-b8dfd3ac779c-kube-api-access-hhd46\") pod \"calico-typha-649999b96f-ggqgh\" (UID: \"1c06113e-5cc5-4395-a917-b8dfd3ac779c\") " pod="calico-system/calico-typha-649999b96f-ggqgh" Jul 14 22:43:33.600342 kubelet[2215]: I0714 22:43:33.600332 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c06113e-5cc5-4395-a917-b8dfd3ac779c-tigera-ca-bundle\") pod \"calico-typha-649999b96f-ggqgh\" (UID: \"1c06113e-5cc5-4395-a917-b8dfd3ac779c\") " pod="calico-system/calico-typha-649999b96f-ggqgh" Jul 14 22:43:33.600342 kubelet[2215]: I0714 22:43:33.600353 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1c06113e-5cc5-4395-a917-b8dfd3ac779c-typha-certs\") pod 
\"calico-typha-649999b96f-ggqgh\" (UID: \"1c06113e-5cc5-4395-a917-b8dfd3ac779c\") " pod="calico-system/calico-typha-649999b96f-ggqgh" Jul 14 22:43:33.741347 kubelet[2215]: E0714 22:43:33.741223 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19" Jul 14 22:43:33.802027 kubelet[2215]: I0714 22:43:33.801978 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d9394e25-5687-481c-a37c-0cefa75dbae1-cni-log-dir\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " pod="calico-system/calico-node-7b699" Jul 14 22:43:33.802224 kubelet[2215]: I0714 22:43:33.802041 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d9394e25-5687-481c-a37c-0cefa75dbae1-node-certs\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " pod="calico-system/calico-node-7b699" Jul 14 22:43:33.802224 kubelet[2215]: I0714 22:43:33.802065 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9394e25-5687-481c-a37c-0cefa75dbae1-xtables-lock\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " pod="calico-system/calico-node-7b699" Jul 14 22:43:33.802224 kubelet[2215]: I0714 22:43:33.802083 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d9394e25-5687-481c-a37c-0cefa75dbae1-policysync\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " 
pod="calico-system/calico-node-7b699" Jul 14 22:43:33.802224 kubelet[2215]: I0714 22:43:33.802098 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d9394e25-5687-481c-a37c-0cefa75dbae1-cni-bin-dir\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " pod="calico-system/calico-node-7b699" Jul 14 22:43:33.802224 kubelet[2215]: I0714 22:43:33.802114 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9394e25-5687-481c-a37c-0cefa75dbae1-tigera-ca-bundle\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " pod="calico-system/calico-node-7b699" Jul 14 22:43:33.802423 kubelet[2215]: I0714 22:43:33.802179 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gprz5\" (UniqueName: \"kubernetes.io/projected/d9394e25-5687-481c-a37c-0cefa75dbae1-kube-api-access-gprz5\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " pod="calico-system/calico-node-7b699" Jul 14 22:43:33.802423 kubelet[2215]: I0714 22:43:33.802267 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d9394e25-5687-481c-a37c-0cefa75dbae1-cni-net-dir\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " pod="calico-system/calico-node-7b699" Jul 14 22:43:33.802423 kubelet[2215]: I0714 22:43:33.802296 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9394e25-5687-481c-a37c-0cefa75dbae1-var-lib-calico\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " pod="calico-system/calico-node-7b699" Jul 14 
22:43:33.802423 kubelet[2215]: I0714 22:43:33.802341 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d9394e25-5687-481c-a37c-0cefa75dbae1-var-run-calico\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " pod="calico-system/calico-node-7b699" Jul 14 22:43:33.802423 kubelet[2215]: I0714 22:43:33.802376 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d9394e25-5687-481c-a37c-0cefa75dbae1-flexvol-driver-host\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " pod="calico-system/calico-node-7b699" Jul 14 22:43:33.802599 kubelet[2215]: I0714 22:43:33.802403 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9394e25-5687-481c-a37c-0cefa75dbae1-lib-modules\") pod \"calico-node-7b699\" (UID: \"d9394e25-5687-481c-a37c-0cefa75dbae1\") " pod="calico-system/calico-node-7b699" Jul 14 22:43:33.859117 kubelet[2215]: E0714 22:43:33.859070 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:33.859525 env[1320]: time="2025-07-14T22:43:33.859485290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-649999b96f-ggqgh,Uid:1c06113e-5cc5-4395-a917-b8dfd3ac779c,Namespace:calico-system,Attempt:0,}" Jul 14 22:43:33.888029 env[1320]: time="2025-07-14T22:43:33.887884747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:43:33.888029 env[1320]: time="2025-07-14T22:43:33.888010126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:43:33.888029 env[1320]: time="2025-07-14T22:43:33.888021387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:43:33.888485 env[1320]: time="2025-07-14T22:43:33.888336927Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce1e1ec0451e3d00ec9135092b7f1f547d00f0d631f8dd10f49709ef7180f99e pid=2622 runtime=io.containerd.runc.v2 Jul 14 22:43:33.902629 kubelet[2215]: I0714 22:43:33.902593 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26698740-5794-455a-b832-1e56047f0f19-kubelet-dir\") pod \"csi-node-driver-lx29x\" (UID: \"26698740-5794-455a-b832-1e56047f0f19\") " pod="calico-system/csi-node-driver-lx29x" Jul 14 22:43:33.902629 kubelet[2215]: I0714 22:43:33.902628 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/26698740-5794-455a-b832-1e56047f0f19-registration-dir\") pod \"csi-node-driver-lx29x\" (UID: \"26698740-5794-455a-b832-1e56047f0f19\") " pod="calico-system/csi-node-driver-lx29x" Jul 14 22:43:33.902824 kubelet[2215]: I0714 22:43:33.902712 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b8ff\" (UniqueName: \"kubernetes.io/projected/26698740-5794-455a-b832-1e56047f0f19-kube-api-access-8b8ff\") pod \"csi-node-driver-lx29x\" (UID: \"26698740-5794-455a-b832-1e56047f0f19\") " pod="calico-system/csi-node-driver-lx29x" Jul 14 22:43:33.902824 kubelet[2215]: I0714 22:43:33.902777 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/26698740-5794-455a-b832-1e56047f0f19-socket-dir\") pod \"csi-node-driver-lx29x\" (UID: \"26698740-5794-455a-b832-1e56047f0f19\") " pod="calico-system/csi-node-driver-lx29x" Jul 14 22:43:33.902824 kubelet[2215]: I0714 22:43:33.902795 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/26698740-5794-455a-b832-1e56047f0f19-varrun\") pod \"csi-node-driver-lx29x\" (UID: \"26698740-5794-455a-b832-1e56047f0f19\") " pod="calico-system/csi-node-driver-lx29x" Jul 14 22:43:33.904148 kubelet[2215]: E0714 22:43:33.904118 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:33.904148 kubelet[2215]: W0714 22:43:33.904141 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:33.904266 kubelet[2215]: E0714 22:43:33.904164 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:33.905652 kubelet[2215]: E0714 22:43:33.904664 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:33.905652 kubelet[2215]: W0714 22:43:33.904679 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:33.905652 kubelet[2215]: E0714 22:43:33.904695 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:33.905652 kubelet[2215]: E0714 22:43:33.904911 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:33.905652 kubelet[2215]: W0714 22:43:33.904923 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:33.905652 kubelet[2215]: E0714 22:43:33.904943 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:33.905652 kubelet[2215]: E0714 22:43:33.905230 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:33.905652 kubelet[2215]: W0714 22:43:33.905239 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:33.905652 kubelet[2215]: E0714 22:43:33.905315 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:33.906283 kubelet[2215]: E0714 22:43:33.906099 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:33.906283 kubelet[2215]: W0714 22:43:33.906110 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:33.906283 kubelet[2215]: E0714 22:43:33.906180 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:33.906283 kubelet[2215]: E0714 22:43:33.906276 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:33.906283 kubelet[2215]: W0714 22:43:33.906285 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:33.906500 kubelet[2215]: E0714 22:43:33.906354 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:33.906500 kubelet[2215]: E0714 22:43:33.906462 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:33.906500 kubelet[2215]: W0714 22:43:33.906471 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:33.906500 kubelet[2215]: E0714 22:43:33.906484 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:33.909345 kubelet[2215]: E0714 22:43:33.906643 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:33.909345 kubelet[2215]: W0714 22:43:33.906654 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:33.909345 kubelet[2215]: E0714 22:43:33.906664 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:33.909345 kubelet[2215]: E0714 22:43:33.906847 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:33.909345 kubelet[2215]: W0714 22:43:33.906857 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:33.909345 kubelet[2215]: E0714 22:43:33.906882 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:33.913484 kubelet[2215]: E0714 22:43:33.913068 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:33.913484 kubelet[2215]: W0714 22:43:33.913091 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:33.913484 kubelet[2215]: E0714 22:43:33.913112 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:33.914078 kubelet[2215]: E0714 22:43:33.914051 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:33.914078 kubelet[2215]: W0714 22:43:33.914072 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:33.914138 kubelet[2215]: E0714 22:43:33.914087 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:33.928049 env[1320]: time="2025-07-14T22:43:33.928011081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7b699,Uid:d9394e25-5687-481c-a37c-0cefa75dbae1,Namespace:calico-system,Attempt:0,}" Jul 14 22:43:33.946245 env[1320]: time="2025-07-14T22:43:33.946193514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-649999b96f-ggqgh,Uid:1c06113e-5cc5-4395-a917-b8dfd3ac779c,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce1e1ec0451e3d00ec9135092b7f1f547d00f0d631f8dd10f49709ef7180f99e\"" Jul 14 22:43:33.948445 kubelet[2215]: E0714 22:43:33.948405 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:33.949116 env[1320]: time="2025-07-14T22:43:33.948844820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:43:33.949116 env[1320]: time="2025-07-14T22:43:33.948915204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:43:33.949116 env[1320]: time="2025-07-14T22:43:33.949054399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:43:33.949333 env[1320]: time="2025-07-14T22:43:33.949291620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 14 22:43:33.949390 env[1320]: time="2025-07-14T22:43:33.949357585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a08f849a4480f50f0bd576f92954cdd1a7726447121df2c0118c75ddff0392 pid=2676 runtime=io.containerd.runc.v2 Jul 14 22:43:33.980675 env[1320]: time="2025-07-14T22:43:33.980631162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7b699,Uid:d9394e25-5687-481c-a37c-0cefa75dbae1,Namespace:calico-system,Attempt:0,} returns sandbox id \"06a08f849a4480f50f0bd576f92954cdd1a7726447121df2c0118c75ddff0392\"" Jul 14 22:43:34.003669 kubelet[2215]: E0714 22:43:34.003576 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:34.003669 kubelet[2215]: W0714 22:43:34.003596 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:34.003669 kubelet[2215]: E0714 22:43:34.003613 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:34.003986 kubelet[2215]: E0714 22:43:34.003953 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:34.003986 kubelet[2215]: W0714 22:43:34.003983 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:34.004074 kubelet[2215]: E0714 22:43:34.003997 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:34.004196 kubelet[2215]: E0714 22:43:34.004171 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:34.004196 kubelet[2215]: W0714 22:43:34.004181 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:34.004196 kubelet[2215]: E0714 22:43:34.004190 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:34.028707 kubelet[2215]: E0714 22:43:34.028681 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:34.028707 kubelet[2215]: W0714 22:43:34.028697 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:34.028707 kubelet[2215]: E0714 22:43:34.028711 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:34.495000 audit[2738]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2738 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:34.495000 audit[2738]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffccc9eccc0 a2=0 a3=7ffccc9eccac items=0 ppid=2323 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:34.495000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:34.504000 audit[2738]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2738 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:43:34.504000 audit[2738]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffccc9eccc0 a2=0 a3=0 items=0 ppid=2323 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:43:34.504000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:43:35.560170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2338033322.mount: Deactivated successfully. Jul 14 22:43:35.649052 kubelet[2215]: E0714 22:43:35.649008 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19" Jul 14 22:43:36.976091 env[1320]: time="2025-07-14T22:43:36.976023738Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:37.039389 env[1320]: time="2025-07-14T22:43:37.039342254Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:37.062861 env[1320]: time="2025-07-14T22:43:37.062804960Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:37.096598 env[1320]: time="2025-07-14T22:43:37.096534550Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:43:37.097209 env[1320]: time="2025-07-14T22:43:37.097159739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 14 22:43:37.098413 env[1320]: 
time="2025-07-14T22:43:37.098373987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 14 22:43:37.117570 env[1320]: time="2025-07-14T22:43:37.117529476Z" level=info msg="CreateContainer within sandbox \"ce1e1ec0451e3d00ec9135092b7f1f547d00f0d631f8dd10f49709ef7180f99e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 14 22:43:37.417095 env[1320]: time="2025-07-14T22:43:37.417026898Z" level=info msg="CreateContainer within sandbox \"ce1e1ec0451e3d00ec9135092b7f1f547d00f0d631f8dd10f49709ef7180f99e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"add6104386ced045bce27dfde974182c30c1bd8473037532f0c34b14351cb3af\"" Jul 14 22:43:37.420994 env[1320]: time="2025-07-14T22:43:37.418431347Z" level=info msg="StartContainer for \"add6104386ced045bce27dfde974182c30c1bd8473037532f0c34b14351cb3af\"" Jul 14 22:43:37.649172 kubelet[2215]: E0714 22:43:37.649109 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19" Jul 14 22:43:37.946231 env[1320]: time="2025-07-14T22:43:37.945770182Z" level=info msg="StartContainer for \"add6104386ced045bce27dfde974182c30c1bd8473037532f0c34b14351cb3af\" returns successfully" Jul 14 22:43:38.949888 kubelet[2215]: E0714 22:43:38.949851 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:39.041428 kubelet[2215]: E0714 22:43:39.041391 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.041428 kubelet[2215]: W0714 22:43:39.041421 2215 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.041649 kubelet[2215]: E0714 22:43:39.041448 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:39.041688 kubelet[2215]: E0714 22:43:39.041649 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.041688 kubelet[2215]: W0714 22:43:39.041658 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.041688 kubelet[2215]: E0714 22:43:39.041667 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:39.041835 kubelet[2215]: E0714 22:43:39.041823 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.041835 kubelet[2215]: W0714 22:43:39.041833 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.041898 kubelet[2215]: E0714 22:43:39.041842 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Error: unexpected end of JSON input" Jul 14 22:43:39.144395 kubelet[2215]: E0714 22:43:39.144377 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.144395 kubelet[2215]: W0714 22:43:39.144391 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.144507 kubelet[2215]: E0714 22:43:39.144406 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:39.144594 kubelet[2215]: E0714 22:43:39.144580 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.144624 kubelet[2215]: W0714 22:43:39.144601 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.144624 kubelet[2215]: E0714 22:43:39.144615 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:39.144863 kubelet[2215]: E0714 22:43:39.144846 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.144863 kubelet[2215]: W0714 22:43:39.144859 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.144995 kubelet[2215]: E0714 22:43:39.144873 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:39.145158 kubelet[2215]: E0714 22:43:39.145134 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.145158 kubelet[2215]: W0714 22:43:39.145149 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.145288 kubelet[2215]: E0714 22:43:39.145166 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:39.145374 kubelet[2215]: E0714 22:43:39.145360 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.145374 kubelet[2215]: W0714 22:43:39.145369 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.145470 kubelet[2215]: E0714 22:43:39.145382 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:39.145625 kubelet[2215]: E0714 22:43:39.145595 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.145625 kubelet[2215]: W0714 22:43:39.145618 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.145711 kubelet[2215]: E0714 22:43:39.145639 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:39.145816 kubelet[2215]: E0714 22:43:39.145802 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.145816 kubelet[2215]: W0714 22:43:39.145813 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.145870 kubelet[2215]: E0714 22:43:39.145825 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:39.146018 kubelet[2215]: E0714 22:43:39.146006 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.146018 kubelet[2215]: W0714 22:43:39.146017 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.146072 kubelet[2215]: E0714 22:43:39.146030 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:39.146295 kubelet[2215]: E0714 22:43:39.146279 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.146295 kubelet[2215]: W0714 22:43:39.146292 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.146379 kubelet[2215]: E0714 22:43:39.146307 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:39.146482 kubelet[2215]: E0714 22:43:39.146468 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:39.146482 kubelet[2215]: W0714 22:43:39.146479 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:39.146530 kubelet[2215]: E0714 22:43:39.146489 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 14 22:43:39.172310 kubelet[2215]: I0714 22:43:39.172241 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-649999b96f-ggqgh" podStartSLOduration=3.02295636 podStartE2EDuration="6.172222176s" podCreationTimestamp="2025-07-14 22:43:33 +0000 UTC" firstStartedPulling="2025-07-14 22:43:33.948894945 +0000 UTC m=+26.390237088" lastFinishedPulling="2025-07-14 22:43:37.098160771 +0000 UTC m=+29.539502904" observedRunningTime="2025-07-14 22:43:39.171805474 +0000 UTC m=+31.613147648" watchObservedRunningTime="2025-07-14 22:43:39.172222176 +0000 UTC m=+31.613564349"
Jul 14 22:43:39.649655 kubelet[2215]: E0714 22:43:39.649333 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19"
Jul 14 22:43:39.688824 env[1320]: time="2025-07-14T22:43:39.688761667Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 22:43:39.700803 env[1320]: time="2025-07-14T22:43:39.700750493Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 22:43:39.706886 env[1320]: time="2025-07-14T22:43:39.706824070Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 22:43:39.711457 env[1320]: time="2025-07-14T22:43:39.711407156Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 22:43:39.711908 env[1320]: time="2025-07-14T22:43:39.711865988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 14 22:43:39.714147 env[1320]: time="2025-07-14T22:43:39.714114479Z" level=info msg="CreateContainer within sandbox \"06a08f849a4480f50f0bd576f92954cdd1a7726447121df2c0118c75ddff0392\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 14 22:43:39.951211 kubelet[2215]: I0714 22:43:39.951104 2215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 22:43:39.951579 kubelet[2215]: E0714 22:43:39.951425 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:43:40.051483 kubelet[2215]: E0714 22:43:40.051433 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:43:40.051483 kubelet[2215]: W0714 22:43:40.051461 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:43:40.051483 kubelet[2215]: E0714 22:43:40.051488 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.051735 kubelet[2215]: E0714 22:43:40.051685 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.051735 kubelet[2215]: W0714 22:43:40.051695 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.051735 kubelet[2215]: E0714 22:43:40.051706 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.051909 kubelet[2215]: E0714 22:43:40.051890 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.051909 kubelet[2215]: W0714 22:43:40.051904 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.052022 kubelet[2215]: E0714 22:43:40.051915 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.052118 kubelet[2215]: E0714 22:43:40.052100 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.052118 kubelet[2215]: W0714 22:43:40.052113 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.052200 kubelet[2215]: E0714 22:43:40.052123 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.052326 kubelet[2215]: E0714 22:43:40.052308 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.052326 kubelet[2215]: W0714 22:43:40.052321 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.052416 kubelet[2215]: E0714 22:43:40.052331 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.052508 kubelet[2215]: E0714 22:43:40.052490 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.052508 kubelet[2215]: W0714 22:43:40.052503 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.052584 kubelet[2215]: E0714 22:43:40.052513 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.052682 kubelet[2215]: E0714 22:43:40.052664 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.052682 kubelet[2215]: W0714 22:43:40.052676 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.052774 kubelet[2215]: E0714 22:43:40.052686 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.052872 kubelet[2215]: E0714 22:43:40.052855 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.052872 kubelet[2215]: W0714 22:43:40.052867 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.052951 kubelet[2215]: E0714 22:43:40.052877 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.053103 kubelet[2215]: E0714 22:43:40.053086 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.053103 kubelet[2215]: W0714 22:43:40.053098 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.053181 kubelet[2215]: E0714 22:43:40.053108 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.053287 kubelet[2215]: E0714 22:43:40.053270 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.053287 kubelet[2215]: W0714 22:43:40.053282 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.053365 kubelet[2215]: E0714 22:43:40.053292 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.053462 kubelet[2215]: E0714 22:43:40.053445 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.053462 kubelet[2215]: W0714 22:43:40.053456 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.053548 kubelet[2215]: E0714 22:43:40.053466 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.053656 kubelet[2215]: E0714 22:43:40.053640 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.053656 kubelet[2215]: W0714 22:43:40.053652 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.053738 kubelet[2215]: E0714 22:43:40.053661 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.053951 kubelet[2215]: E0714 22:43:40.053923 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.053951 kubelet[2215]: W0714 22:43:40.053949 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.054064 kubelet[2215]: E0714 22:43:40.053986 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.054220 kubelet[2215]: E0714 22:43:40.054207 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.054220 kubelet[2215]: W0714 22:43:40.054215 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.054220 kubelet[2215]: E0714 22:43:40.054222 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.054386 kubelet[2215]: E0714 22:43:40.054376 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.054418 kubelet[2215]: W0714 22:43:40.054386 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.054418 kubelet[2215]: E0714 22:43:40.054396 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.054632 kubelet[2215]: E0714 22:43:40.054619 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.054632 kubelet[2215]: W0714 22:43:40.054627 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.054632 kubelet[2215]: E0714 22:43:40.054634 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.054836 kubelet[2215]: E0714 22:43:40.054824 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.054836 kubelet[2215]: W0714 22:43:40.054833 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.054901 kubelet[2215]: E0714 22:43:40.054843 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.055102 kubelet[2215]: E0714 22:43:40.055081 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.055102 kubelet[2215]: W0714 22:43:40.055096 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.055179 kubelet[2215]: E0714 22:43:40.055116 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.055326 kubelet[2215]: E0714 22:43:40.055308 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.055326 kubelet[2215]: W0714 22:43:40.055320 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.055410 kubelet[2215]: E0714 22:43:40.055333 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.055505 kubelet[2215]: E0714 22:43:40.055491 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.055505 kubelet[2215]: W0714 22:43:40.055500 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.055575 kubelet[2215]: E0714 22:43:40.055511 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.055734 kubelet[2215]: E0714 22:43:40.055709 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.055734 kubelet[2215]: W0714 22:43:40.055730 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.055815 kubelet[2215]: E0714 22:43:40.055745 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.056001 kubelet[2215]: E0714 22:43:40.055987 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.056044 kubelet[2215]: W0714 22:43:40.056003 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.056044 kubelet[2215]: E0714 22:43:40.056017 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.056182 kubelet[2215]: E0714 22:43:40.056166 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.056182 kubelet[2215]: W0714 22:43:40.056177 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.056261 kubelet[2215]: E0714 22:43:40.056188 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:43:40.056324 kubelet[2215]: E0714 22:43:40.056311 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.056324 kubelet[2215]: W0714 22:43:40.056319 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.056383 kubelet[2215]: E0714 22:43:40.056329 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:43:40.056468 kubelet[2215]: E0714 22:43:40.056454 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:43:40.056468 kubelet[2215]: W0714 22:43:40.056463 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:43:40.056539 kubelet[2215]: E0714 22:43:40.056472 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 14 22:43:40.056637 kubelet[2215]: E0714 22:43:40.056623 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:43:40.056637 kubelet[2215]: W0714 22:43:40.056633 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:43:40.056708 kubelet[2215]: E0714 22:43:40.056647 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:43:40.056836 kubelet[2215]: E0714 22:43:40.056822 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:43:40.056836 kubelet[2215]: W0714 22:43:40.056833 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:43:40.056907 kubelet[2215]: E0714 22:43:40.056845 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:43:40.057043 kubelet[2215]: E0714 22:43:40.057031 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:43:40.057043 kubelet[2215]: W0714 22:43:40.057041 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:43:40.057122 kubelet[2215]: E0714 22:43:40.057054 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:43:40.057219 kubelet[2215]: E0714 22:43:40.057203 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:43:40.057219 kubelet[2215]: W0714 22:43:40.057215 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:43:40.057287 kubelet[2215]: E0714 22:43:40.057227 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:43:40.057428 kubelet[2215]: E0714 22:43:40.057415 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:43:40.057428 kubelet[2215]: W0714 22:43:40.057426 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:43:40.057478 kubelet[2215]: E0714 22:43:40.057434 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:43:40.057575 kubelet[2215]: E0714 22:43:40.057563 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:43:40.057575 kubelet[2215]: W0714 22:43:40.057572 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:43:40.057636 kubelet[2215]: E0714 22:43:40.057579 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:43:40.057754 kubelet[2215]: E0714 22:43:40.057740 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:43:40.057754 kubelet[2215]: W0714 22:43:40.057751 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:43:40.057811 kubelet[2215]: E0714 22:43:40.057760 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:43:40.060945 kubelet[2215]: E0714 22:43:40.060913 2215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:43:40.060945 kubelet[2215]: W0714 22:43:40.060939 2215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:43:40.061074 kubelet[2215]: E0714 22:43:40.060975 2215 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:43:40.070658 env[1320]: time="2025-07-14T22:43:40.070587035Z" level=info msg="CreateContainer within sandbox \"06a08f849a4480f50f0bd576f92954cdd1a7726447121df2c0118c75ddff0392\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cfd01d3a4d5297d84c089bf1269ba7dc312425c6e40497a02ce0ecc229ec9c65\""
Jul 14 22:43:40.071426 env[1320]: time="2025-07-14T22:43:40.071368499Z" level=info msg="StartContainer for \"cfd01d3a4d5297d84c089bf1269ba7dc312425c6e40497a02ce0ecc229ec9c65\""
Jul 14 22:43:40.161506 env[1320]: time="2025-07-14T22:43:40.161427642Z" level=info msg="StartContainer for \"cfd01d3a4d5297d84c089bf1269ba7dc312425c6e40497a02ce0ecc229ec9c65\" returns successfully"
Jul 14 22:43:40.214537 env[1320]: time="2025-07-14T22:43:40.214400315Z" level=info msg="shim disconnected" id=cfd01d3a4d5297d84c089bf1269ba7dc312425c6e40497a02ce0ecc229ec9c65
Jul 14 22:43:40.214537 env[1320]: time="2025-07-14T22:43:40.214460780Z" level=warning msg="cleaning up after shim disconnected" id=cfd01d3a4d5297d84c089bf1269ba7dc312425c6e40497a02ce0ecc229ec9c65 namespace=k8s.io
Jul 14 22:43:40.214537 env[1320]: time="2025-07-14T22:43:40.214471630Z" level=info msg="cleaning up dead shim"
Jul 14 22:43:40.221509 env[1320]: time="2025-07-14T22:43:40.221466744Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:43:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2897 runtime=io.containerd.runc.v2\n"
Jul 14 22:43:40.743353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfd01d3a4d5297d84c089bf1269ba7dc312425c6e40497a02ce0ecc229ec9c65-rootfs.mount: Deactivated successfully.
Jul 14 22:43:40.954513 kubelet[2215]: I0714 22:43:40.954461 2215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 22:43:40.954905 kubelet[2215]: E0714 22:43:40.954799 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:43:40.956159 env[1320]: time="2025-07-14T22:43:40.956113766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 14 22:43:41.648713 kubelet[2215]: E0714 22:43:41.648665 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19"
Jul 14 22:43:41.666000 audit[2920]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 22:43:41.668091 kernel: kauditd_printk_skb: 25 callbacks suppressed
Jul 14 22:43:41.668144 kernel: audit: type=1325 audit(1752533021.666:317): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 22:43:41.666000 audit[2920]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc19acc4c0 a2=0 a3=7ffc19acc4ac items=0 ppid=2323 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:43:41.675267 kernel: audit: type=1300 audit(1752533021.666:317): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc19acc4c0 a2=0 a3=7ffc19acc4ac items=0 ppid=2323 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:43:41.675309 kernel: audit: type=1327 audit(1752533021.666:317): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 22:43:41.666000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 22:43:41.678000 audit[2920]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 22:43:41.678000 audit[2920]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc19acc4c0 a2=0 a3=7ffc19acc4ac items=0 ppid=2323 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:43:41.687041 kernel: audit: type=1325 audit(1752533021.678:318): table=nat:100 family=2 entries=19 op=nft_register_chain pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 14 22:43:41.687104 kernel: audit: type=1300 audit(1752533021.678:318): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc19acc4c0 a2=0 a3=7ffc19acc4ac items=0 ppid=2323 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 14 22:43:41.687132 kernel: audit: type=1327 audit(1752533021.678:318): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 22:43:41.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 14 22:43:41.957204 kubelet[2215]: E0714 22:43:41.957086 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:43:44.001028 kubelet[2215]: E0714 22:43:44.000956 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19"
Jul 14 22:43:45.649748 kubelet[2215]: E0714 22:43:45.649684 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19"
Jul 14 22:43:46.753987 env[1320]: time="2025-07-14T22:43:46.753910832Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 22:43:46.864517 env[1320]: time="2025-07-14T22:43:46.864460535Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 22:43:46.897201 env[1320]: time="2025-07-14T22:43:46.897144406Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 22:43:46.929155 env[1320]: time="2025-07-14T22:43:46.929093423Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 22:43:46.929919 env[1320]: time="2025-07-14T22:43:46.929880827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 14 22:43:46.932396 env[1320]: time="2025-07-14T22:43:46.932344369Z" level=info msg="CreateContainer within sandbox \"06a08f849a4480f50f0bd576f92954cdd1a7726447121df2c0118c75ddff0392\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 14 22:43:47.541147 env[1320]: time="2025-07-14T22:43:47.541077531Z" level=info msg="CreateContainer within sandbox \"06a08f849a4480f50f0bd576f92954cdd1a7726447121df2c0118c75ddff0392\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"33aec96a724e1b10943bfebdcf436f5b1d87ae6ab2ba0bc8ff8db5e7c137ab1a\""
Jul 14 22:43:47.541661 env[1320]: time="2025-07-14T22:43:47.541625590Z" level=info msg="StartContainer for \"33aec96a724e1b10943bfebdcf436f5b1d87ae6ab2ba0bc8ff8db5e7c137ab1a\""
Jul 14 22:43:47.649258 kubelet[2215]: E0714 22:43:47.649216 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19"
Jul 14 22:43:48.803574 env[1320]: time="2025-07-14T22:43:48.803518549Z" level=info msg="StartContainer for \"33aec96a724e1b10943bfebdcf436f5b1d87ae6ab2ba0bc8ff8db5e7c137ab1a\" returns successfully"
Jul 14 22:43:48.804042 kubelet[2215]: E0714 22:43:48.803781 2215 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.155s"
Jul 14 22:43:49.649260 kubelet[2215]: E0714 22:43:49.649200 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19"
Jul 14 22:43:51.649080 kubelet[2215]: E0714 22:43:51.649020 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19"
Jul 14 22:43:51.862072 env[1320]: time="2025-07-14T22:43:51.862009876Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 14 22:43:51.881002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33aec96a724e1b10943bfebdcf436f5b1d87ae6ab2ba0bc8ff8db5e7c137ab1a-rootfs.mount: Deactivated successfully.
Jul 14 22:43:51.973370 kubelet[2215]: I0714 22:43:51.973268 2215 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 14 22:43:51.996056 env[1320]: time="2025-07-14T22:43:51.996006444Z" level=info msg="shim disconnected" id=33aec96a724e1b10943bfebdcf436f5b1d87ae6ab2ba0bc8ff8db5e7c137ab1a
Jul 14 22:43:51.996056 env[1320]: time="2025-07-14T22:43:51.996052301Z" level=warning msg="cleaning up after shim disconnected" id=33aec96a724e1b10943bfebdcf436f5b1d87ae6ab2ba0bc8ff8db5e7c137ab1a namespace=k8s.io
Jul 14 22:43:51.996056 env[1320]: time="2025-07-14T22:43:51.996062941Z" level=info msg="cleaning up dead shim"
Jul 14 22:43:52.002148 env[1320]: time="2025-07-14T22:43:52.002112942Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:43:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2973 runtime=io.containerd.runc.v2\n"
Jul 14 22:43:52.337488 kubelet[2215]: I0714 22:43:52.337428 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr2qx\" (UniqueName: \"kubernetes.io/projected/573e0651-d8b7-4359-8549-45a022613024-kube-api-access-sr2qx\") pod \"coredns-7c65d6cfc9-scf5h\" (UID: \"573e0651-d8b7-4359-8549-45a022613024\") " pod="kube-system/coredns-7c65d6cfc9-scf5h"
Jul 14 22:43:52.337488 kubelet[2215]: I0714 22:43:52.337465 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/573e0651-d8b7-4359-8549-45a022613024-config-volume\") pod \"coredns-7c65d6cfc9-scf5h\" (UID: \"573e0651-d8b7-4359-8549-45a022613024\") " pod="kube-system/coredns-7c65d6cfc9-scf5h"
Jul 14 22:43:52.438628 kubelet[2215]: I0714 22:43:52.438557 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csk4n\" (UniqueName: \"kubernetes.io/projected/28f9fef2-ff3a-4233-92f1-c94976e9b138-kube-api-access-csk4n\") pod \"calico-apiserver-5f66f5ffdc-7hdt4\" (UID: \"28f9fef2-ff3a-4233-92f1-c94976e9b138\") " pod="calico-apiserver/calico-apiserver-5f66f5ffdc-7hdt4"
Jul 14 22:43:52.438628 kubelet[2215]: I0714 22:43:52.438613 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq7tk\" (UniqueName: \"kubernetes.io/projected/63689819-3628-4d96-bf6f-7f8f144f2164-kube-api-access-qq7tk\") pod \"goldmane-58fd7646b9-q9tk2\" (UID: \"63689819-3628-4d96-bf6f-7f8f144f2164\") " pod="calico-system/goldmane-58fd7646b9-q9tk2"
Jul 14 22:43:52.438628 kubelet[2215]: I0714 22:43:52.438628 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgfz4\" (UniqueName: \"kubernetes.io/projected/44d35327-7f5c-4584-8b0a-dbf8a90adea6-kube-api-access-lgfz4\") pod \"coredns-7c65d6cfc9-j6xgm\" (UID: \"44d35327-7f5c-4584-8b0a-dbf8a90adea6\") " pod="kube-system/coredns-7c65d6cfc9-j6xgm"
Jul 14 22:43:52.438879 kubelet[2215]: I0714 22:43:52.438668 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ececfe63-8e48-4463-becc-747d3684a68e-whisker-ca-bundle\") pod \"whisker-844f5b784b-xzh64\" (UID: \"ececfe63-8e48-4463-becc-747d3684a68e\") " pod="calico-system/whisker-844f5b784b-xzh64"
Jul 14 22:43:52.438879 kubelet[2215]: I0714 22:43:52.438737 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63689819-3628-4d96-bf6f-7f8f144f2164-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-q9tk2\" (UID: \"63689819-3628-4d96-bf6f-7f8f144f2164\") " pod="calico-system/goldmane-58fd7646b9-q9tk2"
Jul 14 22:43:52.438879 kubelet[2215]: I0714 22:43:52.438778 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/63689819-3628-4d96-bf6f-7f8f144f2164-goldmane-key-pair\") pod \"goldmane-58fd7646b9-q9tk2\" (UID: \"63689819-3628-4d96-bf6f-7f8f144f2164\") " pod="calico-system/goldmane-58fd7646b9-q9tk2"
Jul 14 22:43:52.438879 kubelet[2215]: I0714 22:43:52.438799 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52hb4\" (UniqueName: \"kubernetes.io/projected/ececfe63-8e48-4463-becc-747d3684a68e-kube-api-access-52hb4\") pod \"whisker-844f5b784b-xzh64\" (UID: \"ececfe63-8e48-4463-becc-747d3684a68e\") " pod="calico-system/whisker-844f5b784b-xzh64"
Jul 14 22:43:52.438879 kubelet[2215]: I0714 22:43:52.438835 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwf5v\" (UniqueName: \"kubernetes.io/projected/e904006e-54c2-458a-afd4-0856ab783ed3-kube-api-access-pwf5v\") pod \"calico-apiserver-5f66f5ffdc-799wc\" (UID: \"e904006e-54c2-458a-afd4-0856ab783ed3\") " pod="calico-apiserver/calico-apiserver-5f66f5ffdc-799wc"
Jul 14 22:43:52.439106 kubelet[2215]: I0714 22:43:52.438864 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28f9fef2-ff3a-4233-92f1-c94976e9b138-calico-apiserver-certs\") pod \"calico-apiserver-5f66f5ffdc-7hdt4\" (UID: \"28f9fef2-ff3a-4233-92f1-c94976e9b138\") " pod="calico-apiserver/calico-apiserver-5f66f5ffdc-7hdt4"
Jul 14 22:43:52.439106 kubelet[2215]: I0714 22:43:52.438890 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44d35327-7f5c-4584-8b0a-dbf8a90adea6-config-volume\") pod \"coredns-7c65d6cfc9-j6xgm\" (UID: \"44d35327-7f5c-4584-8b0a-dbf8a90adea6\") " pod="kube-system/coredns-7c65d6cfc9-j6xgm"
Jul 14 22:43:52.439106 kubelet[2215]: I0714 22:43:52.438917 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63689819-3628-4d96-bf6f-7f8f144f2164-config\") pod \"goldmane-58fd7646b9-q9tk2\" (UID: \"63689819-3628-4d96-bf6f-7f8f144f2164\") " pod="calico-system/goldmane-58fd7646b9-q9tk2"
Jul 14 22:43:52.439106 kubelet[2215]: I0714 22:43:52.438936 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ececfe63-8e48-4463-becc-747d3684a68e-whisker-backend-key-pair\") pod \"whisker-844f5b784b-xzh64\" (UID: \"ececfe63-8e48-4463-becc-747d3684a68e\") " pod="calico-system/whisker-844f5b784b-xzh64"
Jul 14 22:43:52.439106 kubelet[2215]: I0714 22:43:52.438978 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e904006e-54c2-458a-afd4-0856ab783ed3-calico-apiserver-certs\") pod \"calico-apiserver-5f66f5ffdc-799wc\" (UID: \"e904006e-54c2-458a-afd4-0856ab783ed3\") " pod="calico-apiserver/calico-apiserver-5f66f5ffdc-799wc"
Jul 14 22:43:52.439283 kubelet[2215]: I0714 22:43:52.439012 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb90fe31-1c87-48f5-81dd-a9f3638c4eaf-tigera-ca-bundle\") pod \"calico-kube-controllers-87ddffd96-qc6h6\" (UID: \"cb90fe31-1c87-48f5-81dd-a9f3638c4eaf\") " pod="calico-system/calico-kube-controllers-87ddffd96-qc6h6"
Jul 14 22:43:52.439283 kubelet[2215]: I0714 22:43:52.439038 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhg2z\" (UniqueName: \"kubernetes.io/projected/cb90fe31-1c87-48f5-81dd-a9f3638c4eaf-kube-api-access-bhg2z\") pod \"calico-kube-controllers-87ddffd96-qc6h6\" (UID: \"cb90fe31-1c87-48f5-81dd-a9f3638c4eaf\") " pod="calico-system/calico-kube-controllers-87ddffd96-qc6h6"
Jul 14 22:43:52.639648 env[1320]: time="2025-07-14T22:43:52.639545598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87ddffd96-qc6h6,Uid:cb90fe31-1c87-48f5-81dd-a9f3638c4eaf,Namespace:calico-system,Attempt:0,}"
Jul 14 22:43:52.642757 env[1320]: time="2025-07-14T22:43:52.642628126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-844f5b784b-xzh64,Uid:ececfe63-8e48-4463-becc-747d3684a68e,Namespace:calico-system,Attempt:0,}"
Jul 14 22:43:52.645199 kubelet[2215]: E0714 22:43:52.645177 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:43:52.645331 env[1320]: time="2025-07-14T22:43:52.645297001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f66f5ffdc-799wc,Uid:e904006e-54c2-458a-afd4-0856ab783ed3,Namespace:calico-apiserver,Attempt:0,}"
Jul 14 22:43:52.645642 env[1320]: time="2025-07-14T22:43:52.645508592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j6xgm,Uid:44d35327-7f5c-4584-8b0a-dbf8a90adea6,Namespace:kube-system,Attempt:0,}"
Jul 14 22:43:52.647083 env[1320]: time="2025-07-14T22:43:52.647053909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f66f5ffdc-7hdt4,Uid:28f9fef2-ff3a-4233-92f1-c94976e9b138,Namespace:calico-apiserver,Attempt:0,}"
Jul 14 22:43:52.795527 kubelet[2215]: E0714 22:43:52.795494 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:43:52.796024 env[1320]: time="2025-07-14T22:43:52.795984035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-scf5h,Uid:573e0651-d8b7-4359-8549-45a022613024,Namespace:kube-system,Attempt:0,}"
Jul 14 22:43:52.815503 env[1320]: time="2025-07-14T22:43:52.815472468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 14 22:43:52.936136 env[1320]: time="2025-07-14T22:43:52.936001471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-q9tk2,Uid:63689819-3628-4d96-bf6f-7f8f144f2164,Namespace:calico-system,Attempt:0,}"
Jul 14 22:43:53.651218 env[1320]: time="2025-07-14T22:43:53.651168841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lx29x,Uid:26698740-5794-455a-b832-1e56047f0f19,Namespace:calico-system,Attempt:0,}"
Jul 14 22:43:58.571839 env[1320]: time="2025-07-14T22:43:58.571744364Z" level=error msg="Failed to destroy network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.573845 env[1320]: time="2025-07-14T22:43:58.573798361Z" level=error msg="Failed to destroy network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.574233 env[1320]: time="2025-07-14T22:43:58.574176476Z" level=error msg="encountered an error cleaning up failed sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.574360 env[1320]: time="2025-07-14T22:43:58.574320889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87ddffd96-qc6h6,Uid:cb90fe31-1c87-48f5-81dd-a9f3638c4eaf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.574674 kubelet[2215]: E0714 22:43:58.574615 2215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.575285 kubelet[2215]: E0714 22:43:58.574705 2215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-87ddffd96-qc6h6"
Jul 14 22:43:58.575285 kubelet[2215]: E0714 22:43:58.574732 2215 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-87ddffd96-qc6h6"
Jul 14 22:43:58.575285 kubelet[2215]: E0714 22:43:58.574792 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-87ddffd96-qc6h6_calico-system(cb90fe31-1c87-48f5-81dd-a9f3638c4eaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-87ddffd96-qc6h6_calico-system(cb90fe31-1c87-48f5-81dd-a9f3638c4eaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-87ddffd96-qc6h6" podUID="cb90fe31-1c87-48f5-81dd-a9f3638c4eaf"
Jul 14 22:43:58.574954 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4-shm.mount: Deactivated successfully.
Jul 14 22:43:58.578378 env[1320]: time="2025-07-14T22:43:58.578311591Z" level=error msg="encountered an error cleaning up failed sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.578600 env[1320]: time="2025-07-14T22:43:58.578561594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f66f5ffdc-7hdt4,Uid:28f9fef2-ff3a-4233-92f1-c94976e9b138,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.579163 kubelet[2215]: E0714 22:43:58.578922 2215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.579163 kubelet[2215]: E0714 22:43:58.579012 2215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-7hdt4"
Jul 14 22:43:58.579163 kubelet[2215]: E0714 22:43:58.579039 2215 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-7hdt4"
Jul 14 22:43:58.579324 kubelet[2215]: E0714 22:43:58.579089 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f66f5ffdc-7hdt4_calico-apiserver(28f9fef2-ff3a-4233-92f1-c94976e9b138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f66f5ffdc-7hdt4_calico-apiserver(28f9fef2-ff3a-4233-92f1-c94976e9b138)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-7hdt4" podUID="28f9fef2-ff3a-4233-92f1-c94976e9b138"
Jul 14 22:43:58.579557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803-shm.mount: Deactivated successfully.
Jul 14 22:43:58.619292 env[1320]: time="2025-07-14T22:43:58.619226189Z" level=error msg="Failed to destroy network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.619636 env[1320]: time="2025-07-14T22:43:58.619607781Z" level=error msg="encountered an error cleaning up failed sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.619717 env[1320]: time="2025-07-14T22:43:58.619649821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f66f5ffdc-799wc,Uid:e904006e-54c2-458a-afd4-0856ab783ed3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.619911 kubelet[2215]: E0714 22:43:58.619868 2215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.620011 kubelet[2215]: E0714 22:43:58.619934 2215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-799wc"
Jul 14 22:43:58.620011 kubelet[2215]: E0714 22:43:58.619976 2215 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-799wc"
Jul 14 22:43:58.620097 kubelet[2215]: E0714 22:43:58.620032 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f66f5ffdc-799wc_calico-apiserver(e904006e-54c2-458a-afd4-0856ab783ed3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f66f5ffdc-799wc_calico-apiserver(e904006e-54c2-458a-afd4-0856ab783ed3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-799wc" podUID="e904006e-54c2-458a-afd4-0856ab783ed3"
Jul 14 22:43:58.621592 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958-shm.mount: Deactivated successfully.
Jul 14 22:43:58.666502 env[1320]: time="2025-07-14T22:43:58.666418858Z" level=error msg="Failed to destroy network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.666844 env[1320]: time="2025-07-14T22:43:58.666805629Z" level=error msg="encountered an error cleaning up failed sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.666891 env[1320]: time="2025-07-14T22:43:58.666864079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-scf5h,Uid:573e0651-d8b7-4359-8549-45a022613024,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:43:58.667153 kubelet[2215]: E0714 22:43:58.667115 2215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:58.667218 kubelet[2215]: E0714 22:43:58.667177 2215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-scf5h" Jul 14 22:43:58.667218 kubelet[2215]: E0714 22:43:58.667204 2215 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-scf5h" Jul 14 22:43:58.667284 kubelet[2215]: E0714 22:43:58.667256 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-scf5h_kube-system(573e0651-d8b7-4359-8549-45a022613024)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-scf5h_kube-system(573e0651-d8b7-4359-8549-45a022613024)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-scf5h" podUID="573e0651-d8b7-4359-8549-45a022613024" Jul 14 22:43:58.826041 kubelet[2215]: I0714 22:43:58.825221 2215 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:43:58.826386 env[1320]: time="2025-07-14T22:43:58.826351031Z" level=info msg="StopPodSandbox for \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\"" Jul 14 22:43:58.828029 kubelet[2215]: I0714 22:43:58.827999 2215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:43:58.828696 env[1320]: time="2025-07-14T22:43:58.828657837Z" level=info msg="StopPodSandbox for \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\"" Jul 14 22:43:58.833364 kubelet[2215]: I0714 22:43:58.833329 2215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:43:58.833873 env[1320]: time="2025-07-14T22:43:58.833837507Z" level=info msg="StopPodSandbox for \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\"" Jul 14 22:43:58.835158 kubelet[2215]: I0714 22:43:58.835130 2215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:43:58.835557 env[1320]: time="2025-07-14T22:43:58.835512597Z" level=info msg="StopPodSandbox for \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\"" Jul 14 22:43:58.863595 env[1320]: time="2025-07-14T22:43:58.863527186Z" level=error msg="StopPodSandbox for \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\" failed" error="failed to destroy network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:58.864071 kubelet[2215]: E0714 22:43:58.863794 2215 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:43:58.864071 kubelet[2215]: E0714 22:43:58.863873 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958"} Jul 14 22:43:58.864071 kubelet[2215]: E0714 22:43:58.863976 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e904006e-54c2-458a-afd4-0856ab783ed3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:43:58.864071 kubelet[2215]: E0714 22:43:58.864012 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e904006e-54c2-458a-afd4-0856ab783ed3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-799wc" podUID="e904006e-54c2-458a-afd4-0856ab783ed3" Jul 14 22:43:58.870137 env[1320]: time="2025-07-14T22:43:58.870064456Z" level=error 
msg="StopPodSandbox for \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\" failed" error="failed to destroy network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:58.870381 kubelet[2215]: E0714 22:43:58.870323 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:43:58.870468 kubelet[2215]: E0714 22:43:58.870390 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4"} Jul 14 22:43:58.870468 kubelet[2215]: E0714 22:43:58.870435 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28f9fef2-ff3a-4233-92f1-c94976e9b138\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:43:58.870468 kubelet[2215]: E0714 22:43:58.870460 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28f9fef2-ff3a-4233-92f1-c94976e9b138\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-7hdt4" podUID="28f9fef2-ff3a-4233-92f1-c94976e9b138" Jul 14 22:43:58.874546 env[1320]: time="2025-07-14T22:43:58.874478749Z" level=error msg="Failed to destroy network for sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:58.875114 env[1320]: time="2025-07-14T22:43:58.875082491Z" level=error msg="encountered an error cleaning up failed sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:58.875242 env[1320]: time="2025-07-14T22:43:58.875208570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j6xgm,Uid:44d35327-7f5c-4584-8b0a-dbf8a90adea6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:58.875498 kubelet[2215]: E0714 22:43:58.875448 2215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:58.875566 kubelet[2215]: E0714 22:43:58.875512 2215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j6xgm" Jul 14 22:43:58.875566 kubelet[2215]: E0714 22:43:58.875527 2215 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j6xgm" Jul 14 22:43:58.875647 kubelet[2215]: E0714 22:43:58.875573 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-j6xgm_kube-system(44d35327-7f5c-4584-8b0a-dbf8a90adea6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-j6xgm_kube-system(44d35327-7f5c-4584-8b0a-dbf8a90adea6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-j6xgm" podUID="44d35327-7f5c-4584-8b0a-dbf8a90adea6" Jul 14 22:43:58.889377 env[1320]: 
time="2025-07-14T22:43:58.889305735Z" level=error msg="StopPodSandbox for \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\" failed" error="failed to destroy network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:58.889650 kubelet[2215]: E0714 22:43:58.889591 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:43:58.889727 kubelet[2215]: E0714 22:43:58.889659 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803"} Jul 14 22:43:58.889727 kubelet[2215]: E0714 22:43:58.889692 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb90fe31-1c87-48f5-81dd-a9f3638c4eaf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:43:58.889727 kubelet[2215]: E0714 22:43:58.889709 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb90fe31-1c87-48f5-81dd-a9f3638c4eaf\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-87ddffd96-qc6h6" podUID="cb90fe31-1c87-48f5-81dd-a9f3638c4eaf" Jul 14 22:43:58.890427 env[1320]: time="2025-07-14T22:43:58.890379395Z" level=error msg="StopPodSandbox for \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\" failed" error="failed to destroy network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:58.890651 kubelet[2215]: E0714 22:43:58.890625 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:43:58.890651 kubelet[2215]: E0714 22:43:58.890650 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897"} Jul 14 22:43:58.890769 kubelet[2215]: E0714 22:43:58.890668 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"573e0651-d8b7-4359-8549-45a022613024\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:43:58.890769 kubelet[2215]: E0714 22:43:58.890684 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"573e0651-d8b7-4359-8549-45a022613024\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-scf5h" podUID="573e0651-d8b7-4359-8549-45a022613024" Jul 14 22:43:59.112391 env[1320]: time="2025-07-14T22:43:59.112257343Z" level=error msg="Failed to destroy network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.112677 env[1320]: time="2025-07-14T22:43:59.112638904Z" level=error msg="encountered an error cleaning up failed sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.112722 env[1320]: time="2025-07-14T22:43:59.112695021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-844f5b784b-xzh64,Uid:ececfe63-8e48-4463-becc-747d3684a68e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.113027 kubelet[2215]: E0714 22:43:59.112986 2215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.113108 kubelet[2215]: E0714 22:43:59.113057 2215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-844f5b784b-xzh64" Jul 14 22:43:59.113165 kubelet[2215]: E0714 22:43:59.113121 2215 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-844f5b784b-xzh64" Jul 14 22:43:59.113221 kubelet[2215]: E0714 22:43:59.113191 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-844f5b784b-xzh64_calico-system(ececfe63-8e48-4463-becc-747d3684a68e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-844f5b784b-xzh64_calico-system(ececfe63-8e48-4463-becc-747d3684a68e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-844f5b784b-xzh64" podUID="ececfe63-8e48-4463-becc-747d3684a68e" Jul 14 22:43:59.222103 env[1320]: time="2025-07-14T22:43:59.222039291Z" level=error msg="Failed to destroy network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.222394 env[1320]: time="2025-07-14T22:43:59.222357092Z" level=error msg="encountered an error cleaning up failed sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.222434 env[1320]: time="2025-07-14T22:43:59.222413218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-q9tk2,Uid:63689819-3628-4d96-bf6f-7f8f144f2164,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.222695 kubelet[2215]: E0714 22:43:59.222654 2215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.222758 kubelet[2215]: E0714 22:43:59.222719 2215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-q9tk2" Jul 14 22:43:59.222758 kubelet[2215]: E0714 22:43:59.222738 2215 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-q9tk2" Jul 14 22:43:59.222813 kubelet[2215]: E0714 22:43:59.222780 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-q9tk2_calico-system(63689819-3628-4d96-bf6f-7f8f144f2164)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-q9tk2_calico-system(63689819-3628-4d96-bf6f-7f8f144f2164)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-q9tk2" podUID="63689819-3628-4d96-bf6f-7f8f144f2164" Jul 14 22:43:59.306118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b-shm.mount: Deactivated successfully. Jul 14 22:43:59.306240 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087-shm.mount: Deactivated successfully. Jul 14 22:43:59.306322 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc-shm.mount: Deactivated successfully. Jul 14 22:43:59.306431 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897-shm.mount: Deactivated successfully. Jul 14 22:43:59.529141 env[1320]: time="2025-07-14T22:43:59.526846122Z" level=error msg="Failed to destroy network for sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.529140 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d-shm.mount: Deactivated successfully. 
Jul 14 22:43:59.529370 env[1320]: time="2025-07-14T22:43:59.529295997Z" level=error msg="encountered an error cleaning up failed sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.529614 env[1320]: time="2025-07-14T22:43:59.529575265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lx29x,Uid:26698740-5794-455a-b832-1e56047f0f19,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.530859 kubelet[2215]: E0714 22:43:59.529912 2215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:43:59.530859 kubelet[2215]: E0714 22:43:59.530210 2215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lx29x" Jul 14 22:43:59.530859 kubelet[2215]: E0714 22:43:59.530247 2215 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lx29x" Jul 14 22:43:59.531014 kubelet[2215]: E0714 22:43:59.530293 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lx29x_calico-system(26698740-5794-455a-b832-1e56047f0f19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lx29x_calico-system(26698740-5794-455a-b832-1e56047f0f19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19" Jul 14 22:43:59.837866 kubelet[2215]: I0714 22:43:59.837830 2215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:43:59.838438 env[1320]: time="2025-07-14T22:43:59.838404266Z" level=info msg="StopPodSandbox for \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\"" Jul 14 22:43:59.838686 kubelet[2215]: I0714 22:43:59.838559 2215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:43:59.838908 env[1320]: time="2025-07-14T22:43:59.838889975Z" level=info msg="StopPodSandbox for \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\"" Jul 14 22:43:59.840101 
kubelet[2215]: I0714 22:43:59.840081 2215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:43:59.840646 env[1320]: time="2025-07-14T22:43:59.840622352Z" level=info msg="StopPodSandbox for \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\"" Jul 14 22:43:59.841627 kubelet[2215]: I0714 22:43:59.841603 2215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:43:59.842102 env[1320]: time="2025-07-14T22:43:59.842072646Z" level=info msg="StopPodSandbox for \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\"" Jul 14 22:43:59.872197 env[1320]: time="2025-07-14T22:43:59.872112849Z" level=error msg="StopPodSandbox for \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\" failed" error="failed to destroy network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:44:00.002272 env[1320]: time="2025-07-14T22:43:59.876287658Z" level=error msg="StopPodSandbox for \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\" failed" error="failed to destroy network for sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:44:00.002272 env[1320]: time="2025-07-14T22:43:59.877688859Z" level=error msg="StopPodSandbox for \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\" failed" error="failed to destroy network for sandbox 
\"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:44:00.002272 env[1320]: time="2025-07-14T22:43:59.978228453Z" level=error msg="StopPodSandbox for \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\" failed" error="failed to destroy network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:44:00.002486 kubelet[2215]: E0714 22:43:59.872384 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:44:00.002486 kubelet[2215]: E0714 22:43:59.872439 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b"} Jul 14 22:44:00.002486 kubelet[2215]: E0714 22:43:59.872486 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63689819-3628-4d96-bf6f-7f8f144f2164\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" Jul 14 22:44:00.002486 kubelet[2215]: E0714 22:43:59.872531 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63689819-3628-4d96-bf6f-7f8f144f2164\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-q9tk2" podUID="63689819-3628-4d96-bf6f-7f8f144f2164" Jul 14 22:44:00.002919 kubelet[2215]: E0714 22:43:59.876504 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:44:00.002919 kubelet[2215]: E0714 22:43:59.876551 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087"} Jul 14 22:44:00.002919 kubelet[2215]: E0714 22:43:59.876588 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"44d35327-7f5c-4584-8b0a-dbf8a90adea6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jul 14 22:44:00.002919 kubelet[2215]: E0714 22:43:59.876613 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"44d35327-7f5c-4584-8b0a-dbf8a90adea6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-j6xgm" podUID="44d35327-7f5c-4584-8b0a-dbf8a90adea6" Jul 14 22:44:00.003094 kubelet[2215]: E0714 22:43:59.877790 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:44:00.003094 kubelet[2215]: E0714 22:43:59.877812 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d"} Jul 14 22:44:00.003094 kubelet[2215]: E0714 22:43:59.877834 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26698740-5794-455a-b832-1e56047f0f19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 
22:44:00.003094 kubelet[2215]: E0714 22:43:59.877853 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26698740-5794-455a-b832-1e56047f0f19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19" Jul 14 22:44:00.003263 kubelet[2215]: E0714 22:43:59.978488 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:44:00.003263 kubelet[2215]: E0714 22:43:59.978546 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc"} Jul 14 22:44:00.003263 kubelet[2215]: E0714 22:43:59.978582 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ececfe63-8e48-4463-becc-747d3684a68e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:44:00.003263 kubelet[2215]: E0714 
22:43:59.978602 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ececfe63-8e48-4463-becc-747d3684a68e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-844f5b784b-xzh64" podUID="ececfe63-8e48-4463-becc-747d3684a68e" Jul 14 22:44:06.698098 kernel: audit: type=1130 audit(1752533046.689:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.12:22-10.0.0.1:44484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:06.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.12:22-10.0.0.1:44484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:06.691354 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:44484.service. 
Jul 14 22:44:06.762000 audit[3420]: USER_ACCT pid=3420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:06.764029 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 44484 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:06.770000 audit[3420]: CRED_ACQ pid=3420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:06.775709 kernel: audit: type=1101 audit(1752533046.762:320): pid=3420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:06.775761 kernel: audit: type=1103 audit(1752533046.770:321): pid=3420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:06.777545 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:06.778464 kernel: audit: type=1006 audit(1752533046.770:322): pid=3420 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jul 14 22:44:06.770000 audit[3420]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd3779a90 a2=3 a3=0 items=0 ppid=1 pid=3420 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:06.782972 kernel: audit: 
type=1300 audit(1752533046.770:322): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd3779a90 a2=3 a3=0 items=0 ppid=1 pid=3420 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:06.770000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:06.784985 kernel: audit: type=1327 audit(1752533046.770:322): proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:06.785537 systemd-logind[1309]: New session 8 of user core. Jul 14 22:44:06.786230 systemd[1]: Started session-8.scope. Jul 14 22:44:06.789000 audit[3420]: USER_START pid=3420 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:06.790000 audit[3423]: CRED_ACQ pid=3423 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:06.798316 kernel: audit: type=1105 audit(1752533046.789:323): pid=3420 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:06.798377 kernel: audit: type=1103 audit(1752533046.790:324): pid=3423 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:07.133536 sshd[3420]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:07.144027 kernel: audit: 
type=1106 audit(1752533047.133:325): pid=3420 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:07.144169 kernel: audit: type=1104 audit(1752533047.133:326): pid=3420 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:07.133000 audit[3420]: USER_END pid=3420 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:07.133000 audit[3420]: CRED_DISP pid=3420 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:07.135923 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:44484.service: Deactivated successfully. Jul 14 22:44:07.136954 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 22:44:07.137419 systemd-logind[1309]: Session 8 logged out. Waiting for processes to exit. Jul 14 22:44:07.138121 systemd-logind[1309]: Removed session 8. Jul 14 22:44:07.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.12:22-10.0.0.1:44484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:07.525186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679010880.mount: Deactivated successfully. 
Jul 14 22:44:10.649586 env[1320]: time="2025-07-14T22:44:10.649541502Z" level=info msg="StopPodSandbox for \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\"" Jul 14 22:44:10.650181 env[1320]: time="2025-07-14T22:44:10.649548214Z" level=info msg="StopPodSandbox for \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\"" Jul 14 22:44:10.650254 env[1320]: time="2025-07-14T22:44:10.649556019Z" level=info msg="StopPodSandbox for \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\"" Jul 14 22:44:11.375273 env[1320]: time="2025-07-14T22:44:11.375217178Z" level=error msg="StopPodSandbox for \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\" failed" error="failed to destroy network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:44:11.375530 env[1320]: time="2025-07-14T22:44:11.375461591Z" level=error msg="StopPodSandbox for \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\" failed" error="failed to destroy network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:44:11.375839 kubelet[2215]: E0714 22:44:11.375636 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:44:11.375839 kubelet[2215]: E0714 22:44:11.375705 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4"} Jul 14 22:44:11.375839 kubelet[2215]: E0714 22:44:11.375750 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28f9fef2-ff3a-4233-92f1-c94976e9b138\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:44:11.375839 kubelet[2215]: E0714 22:44:11.375771 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28f9fef2-ff3a-4233-92f1-c94976e9b138\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-7hdt4" podUID="28f9fef2-ff3a-4233-92f1-c94976e9b138" Jul 14 22:44:11.376694 kubelet[2215]: E0714 22:44:11.376596 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:44:11.376694 kubelet[2215]: E0714 22:44:11.376624 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897"} Jul 14 22:44:11.376694 kubelet[2215]: E0714 22:44:11.376650 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"573e0651-d8b7-4359-8549-45a022613024\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:44:11.376694 kubelet[2215]: E0714 22:44:11.376666 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"573e0651-d8b7-4359-8549-45a022613024\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-scf5h" podUID="573e0651-d8b7-4359-8549-45a022613024" Jul 14 22:44:11.381341 env[1320]: time="2025-07-14T22:44:11.381286004Z" level=error msg="StopPodSandbox for \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\" failed" error="failed to destroy network for sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 
22:44:11.381501 kubelet[2215]: E0714 22:44:11.381471 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:44:11.381501 kubelet[2215]: E0714 22:44:11.381498 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d"} Jul 14 22:44:11.381501 kubelet[2215]: E0714 22:44:11.381516 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26698740-5794-455a-b832-1e56047f0f19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:44:11.381726 kubelet[2215]: E0714 22:44:11.381533 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26698740-5794-455a-b832-1e56047f0f19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lx29x" podUID="26698740-5794-455a-b832-1e56047f0f19" Jul 14 22:44:11.649986 env[1320]: 
time="2025-07-14T22:44:11.649855406Z" level=info msg="StopPodSandbox for \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\"" Jul 14 22:44:11.670757 env[1320]: time="2025-07-14T22:44:11.670703821Z" level=error msg="StopPodSandbox for \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\" failed" error="failed to destroy network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:44:11.670997 kubelet[2215]: E0714 22:44:11.670928 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:44:11.671064 kubelet[2215]: E0714 22:44:11.671012 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803"} Jul 14 22:44:11.671064 kubelet[2215]: E0714 22:44:11.671057 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb90fe31-1c87-48f5-81dd-a9f3638c4eaf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:44:11.671153 kubelet[2215]: E0714 22:44:11.671082 
2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb90fe31-1c87-48f5-81dd-a9f3638c4eaf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-87ddffd96-qc6h6" podUID="cb90fe31-1c87-48f5-81dd-a9f3638c4eaf" Jul 14 22:44:12.116656 env[1320]: time="2025-07-14T22:44:12.116601403Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:12.137349 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:39558.service. Jul 14 22:44:12.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.12:22-10.0.0.1:39558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:12.138384 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 22:44:12.138512 kernel: audit: type=1130 audit(1752533052.136:328): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.12:22-10.0.0.1:39558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:44:12.179442 env[1320]: time="2025-07-14T22:44:12.179384048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:12.179000 audit[3513]: USER_ACCT pid=3513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.181016 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 39558 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:12.183801 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:12.182000 audit[3513]: CRED_ACQ pid=3513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.189337 kernel: audit: type=1101 audit(1752533052.179:329): pid=3513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.189425 kernel: audit: type=1103 audit(1752533052.182:330): pid=3513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.189454 kernel: audit: type=1006 audit(1752533052.182:331): pid=3513 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 14 22:44:12.192804 kernel: audit: type=1300 
audit(1752533052.182:331): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1e1f52b0 a2=3 a3=0 items=0 ppid=1 pid=3513 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:12.182000 audit[3513]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1e1f52b0 a2=3 a3=0 items=0 ppid=1 pid=3513 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:12.191878 systemd-logind[1309]: New session 9 of user core. Jul 14 22:44:12.192059 systemd[1]: Started session-9.scope. Jul 14 22:44:12.198455 kernel: audit: type=1327 audit(1752533052.182:331): proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:12.182000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:12.196000 audit[3513]: USER_START pid=3513 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.203849 kernel: audit: type=1105 audit(1752533052.196:332): pid=3513 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.203923 kernel: audit: type=1103 audit(1752533052.198:333): pid=3516 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.198000 audit[3516]: CRED_ACQ pid=3516 uid=0 auid=500 ses=9 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.252626 env[1320]: time="2025-07-14T22:44:12.252583109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:12.322410 env[1320]: time="2025-07-14T22:44:12.322363914Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:12.322613 env[1320]: time="2025-07-14T22:44:12.322564643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 14 22:44:12.334634 env[1320]: time="2025-07-14T22:44:12.334591902Z" level=info msg="CreateContainer within sandbox \"06a08f849a4480f50f0bd576f92954cdd1a7726447121df2c0118c75ddff0392\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 22:44:12.445562 sshd[3513]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:12.445000 audit[3513]: USER_END pid=3513 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.447952 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:39558.service: Deactivated successfully. Jul 14 22:44:12.448672 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 22:44:12.449536 systemd-logind[1309]: Session 9 logged out. Waiting for processes to exit. Jul 14 22:44:12.450302 systemd-logind[1309]: Removed session 9. 
Jul 14 22:44:12.445000 audit[3513]: CRED_DISP pid=3513 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.455863 kernel: audit: type=1106 audit(1752533052.445:334): pid=3513 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.456000 kernel: audit: type=1104 audit(1752533052.445:335): pid=3513 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:12.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.12:22-10.0.0.1:39558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:44:12.650188 env[1320]: time="2025-07-14T22:44:12.650115789Z" level=info msg="StopPodSandbox for \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\"" Jul 14 22:44:12.650188 env[1320]: time="2025-07-14T22:44:12.650115529Z" level=info msg="StopPodSandbox for \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\"" Jul 14 22:44:12.678072 env[1320]: time="2025-07-14T22:44:12.677998600Z" level=error msg="StopPodSandbox for \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\" failed" error="failed to destroy network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:44:12.678635 kubelet[2215]: E0714 22:44:12.678458 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:44:12.678635 kubelet[2215]: E0714 22:44:12.678516 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958"} Jul 14 22:44:12.678635 kubelet[2215]: E0714 22:44:12.678557 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e904006e-54c2-458a-afd4-0856ab783ed3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:44:12.678635 kubelet[2215]: E0714 22:44:12.678584 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e904006e-54c2-458a-afd4-0856ab783ed3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-799wc" podUID="e904006e-54c2-458a-afd4-0856ab783ed3" Jul 14 22:44:12.681809 env[1320]: time="2025-07-14T22:44:12.681741329Z" level=error msg="StopPodSandbox for \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\" failed" error="failed to destroy network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:44:12.681999 kubelet[2215]: E0714 22:44:12.681948 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:44:12.682053 kubelet[2215]: E0714 22:44:12.682011 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc"} Jul 14 22:44:12.682053 kubelet[2215]: E0714 22:44:12.682043 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ececfe63-8e48-4463-becc-747d3684a68e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:44:12.682123 kubelet[2215]: E0714 22:44:12.682063 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ececfe63-8e48-4463-becc-747d3684a68e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-844f5b784b-xzh64" podUID="ececfe63-8e48-4463-becc-747d3684a68e" Jul 14 22:44:13.470326 env[1320]: time="2025-07-14T22:44:13.470250169Z" level=info msg="CreateContainer within sandbox \"06a08f849a4480f50f0bd576f92954cdd1a7726447121df2c0118c75ddff0392\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2e264eaf1013788458bc3ce6fad8414cae4f82efc46a599a1fc04a29ba23be15\"" Jul 14 22:44:13.470987 env[1320]: time="2025-07-14T22:44:13.470920425Z" level=info msg="StartContainer for \"2e264eaf1013788458bc3ce6fad8414cae4f82efc46a599a1fc04a29ba23be15\"" Jul 14 22:44:15.133378 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jul 14 22:44:15.133555 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 14 22:44:15.206472 env[1320]: time="2025-07-14T22:44:15.206408856Z" level=info msg="StartContainer for \"2e264eaf1013788458bc3ce6fad8414cae4f82efc46a599a1fc04a29ba23be15\" returns successfully" Jul 14 22:44:15.207112 kubelet[2215]: E0714 22:44:15.207078 2215 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.558s" Jul 14 22:44:15.207706 env[1320]: time="2025-07-14T22:44:15.207669046Z" level=info msg="StopPodSandbox for \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\"" Jul 14 22:44:15.207984 env[1320]: time="2025-07-14T22:44:15.207943494Z" level=info msg="StopPodSandbox for \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\"" Jul 14 22:44:15.227556 env[1320]: time="2025-07-14T22:44:15.227490598Z" level=error msg="StopPodSandbox for \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\" failed" error="failed to destroy network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:44:15.227783 kubelet[2215]: E0714 22:44:15.227733 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:44:15.227885 kubelet[2215]: E0714 22:44:15.227803 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b"} Jul 14 22:44:15.227885 kubelet[2215]: E0714 22:44:15.227859 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63689819-3628-4d96-bf6f-7f8f144f2164\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:44:15.228065 kubelet[2215]: E0714 22:44:15.227892 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63689819-3628-4d96-bf6f-7f8f144f2164\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-q9tk2" podUID="63689819-3628-4d96-bf6f-7f8f144f2164" Jul 14 22:44:15.230526 env[1320]: time="2025-07-14T22:44:15.230473652Z" level=error msg="StopPodSandbox for \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\" failed" error="failed to destroy network for sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:44:15.230646 kubelet[2215]: E0714 22:44:15.230605 2215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:44:15.230701 kubelet[2215]: E0714 22:44:15.230645 2215 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087"} Jul 14 22:44:15.230701 kubelet[2215]: E0714 22:44:15.230683 2215 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"44d35327-7f5c-4584-8b0a-dbf8a90adea6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:44:15.230795 kubelet[2215]: E0714 22:44:15.230698 2215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"44d35327-7f5c-4584-8b0a-dbf8a90adea6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-j6xgm" podUID="44d35327-7f5c-4584-8b0a-dbf8a90adea6" Jul 14 22:44:17.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.12:22-10.0.0.1:39578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 14 22:44:17.448424 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:39578.service. Jul 14 22:44:17.483990 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 22:44:17.484091 kernel: audit: type=1130 audit(1752533057.447:337): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.12:22-10.0.0.1:39578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:17.521000 audit[3698]: USER_ACCT pid=3698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:17.522766 sshd[3698]: Accepted publickey for core from 10.0.0.1 port 39578 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:17.524462 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:17.523000 audit[3698]: CRED_ACQ pid=3698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:17.529788 systemd-logind[1309]: New session 10 of user core. Jul 14 22:44:17.530675 systemd[1]: Started session-10.scope. 
Jul 14 22:44:17.531224 kernel: audit: type=1101 audit(1752533057.521:338): pid=3698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:17.531352 kernel: audit: type=1103 audit(1752533057.523:339): pid=3698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:17.531396 kernel: audit: type=1006 audit(1752533057.523:340): pid=3698 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 14 22:44:17.523000 audit[3698]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdfeaf3b30 a2=3 a3=0 items=0 ppid=1 pid=3698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:17.537822 kernel: audit: type=1300 audit(1752533057.523:340): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdfeaf3b30 a2=3 a3=0 items=0 ppid=1 pid=3698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:17.537874 kernel: audit: type=1327 audit(1752533057.523:340): proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:17.523000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:17.534000 audit[3698]: USER_START pid=3698 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jul 14 22:44:17.543441 kernel: audit: type=1105 audit(1752533057.534:341): pid=3698 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:17.543487 kernel: audit: type=1103 audit(1752533057.535:342): pid=3701 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:17.535000 audit[3701]: CRED_ACQ pid=3701 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:17.918818 sshd[3698]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:17.918000 audit[3698]: USER_END pid=3698 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:17.920730 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:39578.service: Deactivated successfully. Jul 14 22:44:17.921586 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 22:44:17.921919 systemd-logind[1309]: Session 10 logged out. Waiting for processes to exit. Jul 14 22:44:17.922533 systemd-logind[1309]: Removed session 10. 
Jul 14 22:44:17.918000 audit[3698]: CRED_DISP pid=3698 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:17.968773 kernel: audit: type=1106 audit(1752533057.918:343): pid=3698 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:17.968826 kernel: audit: type=1104 audit(1752533057.918:344): pid=3698 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:17.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.12:22-10.0.0.1:39578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:44:18.996773 kubelet[2215]: I0714 22:44:18.996258 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7b699" podStartSLOduration=7.654444675 podStartE2EDuration="45.99623605s" podCreationTimestamp="2025-07-14 22:43:33 +0000 UTC" firstStartedPulling="2025-07-14 22:43:33.981820205 +0000 UTC m=+26.423162348" lastFinishedPulling="2025-07-14 22:44:12.32361158 +0000 UTC m=+64.764953723" observedRunningTime="2025-07-14 22:44:16.814480442 +0000 UTC m=+69.255822585" watchObservedRunningTime="2025-07-14 22:44:18.99623605 +0000 UTC m=+71.437578193" Jul 14 22:44:19.001656 env[1320]: time="2025-07-14T22:44:19.001605588Z" level=info msg="StopPodSandbox for \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\"" Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.272 [INFO][3732] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.272 [INFO][3732] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" iface="eth0" netns="/var/run/netns/cni-31426509-a26d-964d-6e17-bc19f8f588b3" Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.272 [INFO][3732] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" iface="eth0" netns="/var/run/netns/cni-31426509-a26d-964d-6e17-bc19f8f588b3" Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.283 [INFO][3732] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" iface="eth0" netns="/var/run/netns/cni-31426509-a26d-964d-6e17-bc19f8f588b3" Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.283 [INFO][3732] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.283 [INFO][3732] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.326 [INFO][3741] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" HandleID="k8s-pod-network.6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Workload="localhost-k8s-whisker--844f5b784b--xzh64-eth0" Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.326 [INFO][3741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.326 [INFO][3741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.360 [WARNING][3741] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" HandleID="k8s-pod-network.6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Workload="localhost-k8s-whisker--844f5b784b--xzh64-eth0" Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.360 [INFO][3741] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" HandleID="k8s-pod-network.6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Workload="localhost-k8s-whisker--844f5b784b--xzh64-eth0" Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.362 [INFO][3741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:19.364650 env[1320]: 2025-07-14 22:44:19.363 [INFO][3732] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:44:19.365259 env[1320]: time="2025-07-14T22:44:19.364796601Z" level=info msg="TearDown network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\" successfully" Jul 14 22:44:19.365259 env[1320]: time="2025-07-14T22:44:19.364824613Z" level=info msg="StopPodSandbox for \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\" returns successfully" Jul 14 22:44:19.367206 systemd[1]: run-netns-cni\x2d31426509\x2da26d\x2d964d\x2d6e17\x2dbc19f8f588b3.mount: Deactivated successfully. 
Jul 14 22:44:19.524359 kubelet[2215]: I0714 22:44:19.524321 2215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ececfe63-8e48-4463-becc-747d3684a68e-whisker-ca-bundle\") pod \"ececfe63-8e48-4463-becc-747d3684a68e\" (UID: \"ececfe63-8e48-4463-becc-747d3684a68e\") " Jul 14 22:44:19.524359 kubelet[2215]: I0714 22:44:19.524370 2215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52hb4\" (UniqueName: \"kubernetes.io/projected/ececfe63-8e48-4463-becc-747d3684a68e-kube-api-access-52hb4\") pod \"ececfe63-8e48-4463-becc-747d3684a68e\" (UID: \"ececfe63-8e48-4463-becc-747d3684a68e\") " Jul 14 22:44:19.524604 kubelet[2215]: I0714 22:44:19.524396 2215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ececfe63-8e48-4463-becc-747d3684a68e-whisker-backend-key-pair\") pod \"ececfe63-8e48-4463-becc-747d3684a68e\" (UID: \"ececfe63-8e48-4463-becc-747d3684a68e\") " Jul 14 22:44:19.524689 kubelet[2215]: I0714 22:44:19.524665 2215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ececfe63-8e48-4463-becc-747d3684a68e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ececfe63-8e48-4463-becc-747d3684a68e" (UID: "ececfe63-8e48-4463-becc-747d3684a68e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:44:19.526801 kubelet[2215]: I0714 22:44:19.526762 2215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ececfe63-8e48-4463-becc-747d3684a68e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ececfe63-8e48-4463-becc-747d3684a68e" (UID: "ececfe63-8e48-4463-becc-747d3684a68e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 22:44:19.526874 kubelet[2215]: I0714 22:44:19.526854 2215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ececfe63-8e48-4463-becc-747d3684a68e-kube-api-access-52hb4" (OuterVolumeSpecName: "kube-api-access-52hb4") pod "ececfe63-8e48-4463-becc-747d3684a68e" (UID: "ececfe63-8e48-4463-becc-747d3684a68e"). InnerVolumeSpecName "kube-api-access-52hb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:44:19.528680 systemd[1]: var-lib-kubelet-pods-ececfe63\x2d8e48\x2d4463\x2dbecc\x2d747d3684a68e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d52hb4.mount: Deactivated successfully. Jul 14 22:44:19.528804 systemd[1]: var-lib-kubelet-pods-ececfe63\x2d8e48\x2d4463\x2dbecc\x2d747d3684a68e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 14 22:44:19.625566 kubelet[2215]: I0714 22:44:19.625430 2215 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ececfe63-8e48-4463-becc-747d3684a68e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:19.625566 kubelet[2215]: I0714 22:44:19.625468 2215 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ececfe63-8e48-4463-becc-747d3684a68e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:19.625566 kubelet[2215]: I0714 22:44:19.625478 2215 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52hb4\" (UniqueName: \"kubernetes.io/projected/ececfe63-8e48-4463-becc-747d3684a68e-kube-api-access-52hb4\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:20.446000 audit[3835]: AVC avc: denied { write } for pid=3835 comm="tee" name="fd" dev="proc" ino=26033 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 
22:44:20.446000 audit[3835]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd95a3f7eb a2=241 a3=1b6 items=1 ppid=3776 pid=3835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:20.446000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 14 22:44:20.446000 audit: PATH item=0 name="/dev/fd/63" inode=26918 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:44:20.446000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 22:44:20.449000 audit[3826]: AVC avc: denied { write } for pid=3826 comm="tee" name="fd" dev="proc" ino=26035 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 22:44:20.449000 audit[3826]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe403687e9 a2=241 a3=1b6 items=1 ppid=3777 pid=3826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:20.449000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 14 22:44:20.449000 audit: PATH item=0 name="/dev/fd/63" inode=26908 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:44:20.449000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 22:44:20.451000 audit[3843]: AVC avc: denied { write } for pid=3843 comm="tee" name="fd" dev="proc" ino=26039 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 22:44:20.451000 audit[3843]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffedd88d7e9 a2=241 a3=1b6 items=1 ppid=3771 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:20.451000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 14 22:44:20.451000 audit: PATH item=0 name="/dev/fd/63" inode=27652 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:44:20.451000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 22:44:20.467000 audit[3838]: AVC avc: denied { write } for pid=3838 comm="tee" name="fd" dev="proc" ino=26924 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 22:44:20.467000 audit[3838]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffefa2a37da a2=241 a3=1b6 items=1 ppid=3778 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:20.467000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 14 22:44:20.467000 audit: PATH item=0 name="/dev/fd/63" inode=26921 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:44:20.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 
22:44:20.476000 audit[3830]: AVC avc: denied { write } for pid=3830 comm="tee" name="fd" dev="proc" ino=27657 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 22:44:20.476000 audit[3830]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcd16a47e9 a2=241 a3=1b6 items=1 ppid=3784 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:20.480000 audit[3851]: AVC avc: denied { write } for pid=3851 comm="tee" name="fd" dev="proc" ino=26046 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 22:44:20.480000 audit[3851]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe69dcb7ea a2=241 a3=1b6 items=1 ppid=3787 pid=3851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:20.480000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 14 22:44:20.480000 audit: PATH item=0 name="/dev/fd/63" inode=26926 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:44:20.480000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 22:44:20.476000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 14 22:44:20.476000 audit: PATH item=0 name="/dev/fd/63" inode=26915 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:44:20.476000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 22:44:20.485000 audit[3854]: AVC avc: denied { write } for pid=3854 comm="tee" name="fd" dev="proc" ino=26050 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 14 22:44:20.485000 audit[3854]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe75b157d9 a2=241 a3=1b6 items=1 ppid=3772 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:20.485000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 14 22:44:20.485000 audit: PATH item=0 name="/dev/fd/63" inode=27662 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:44:20.485000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 14 22:44:20.934237 kubelet[2215]: I0714 22:44:20.934177 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98t2t\" (UniqueName: \"kubernetes.io/projected/0837f224-bd9d-46a3-8c12-c953ec709723-kube-api-access-98t2t\") pod \"whisker-5f6b6647b6-8j8t7\" (UID: \"0837f224-bd9d-46a3-8c12-c953ec709723\") " pod="calico-system/whisker-5f6b6647b6-8j8t7" Jul 14 22:44:20.934237 kubelet[2215]: I0714 22:44:20.934238 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0837f224-bd9d-46a3-8c12-c953ec709723-whisker-ca-bundle\") pod \"whisker-5f6b6647b6-8j8t7\" (UID: \"0837f224-bd9d-46a3-8c12-c953ec709723\") " 
pod="calico-system/whisker-5f6b6647b6-8j8t7" Jul 14 22:44:20.934683 kubelet[2215]: I0714 22:44:20.934260 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0837f224-bd9d-46a3-8c12-c953ec709723-whisker-backend-key-pair\") pod \"whisker-5f6b6647b6-8j8t7\" (UID: \"0837f224-bd9d-46a3-8c12-c953ec709723\") " pod="calico-system/whisker-5f6b6647b6-8j8t7" Jul 14 22:44:21.122617 env[1320]: time="2025-07-14T22:44:21.122558475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f6b6647b6-8j8t7,Uid:0837f224-bd9d-46a3-8c12-c953ec709723,Namespace:calico-system,Attempt:0,}" Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 
22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit: BPF prog-id=10 op=LOAD Jul 14 22:44:21.268000 audit[3892]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff106aeb30 a2=98 a3=1fffffffffffffff items=0 ppid=3781 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.268000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 14 22:44:21.268000 audit: BPF prog-id=10 op=UNLOAD Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit: BPF prog-id=11 op=LOAD Jul 14 22:44:21.268000 audit[3892]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff106aea10 a2=94 a3=3 items=0 ppid=3781 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.268000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 14 22:44:21.268000 audit: BPF prog-id=11 op=UNLOAD Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { bpf } for pid=3892 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit: BPF prog-id=12 op=LOAD Jul 14 22:44:21.268000 audit[3892]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff106aea50 a2=94 a3=7fff106aec30 items=0 ppid=3781 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.268000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 14 22:44:21.268000 audit: BPF prog-id=12 op=UNLOAD Jul 14 22:44:21.268000 audit[3892]: AVC avc: denied { perfmon } for pid=3892 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.268000 audit[3892]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fff106aeb20 a2=50 a3=a000000085 items=0 ppid=3781 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.268000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit: BPF prog-id=13 op=LOAD Jul 14 22:44:21.269000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb7c7b080 a2=98 a3=3 
items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.269000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.269000 audit: BPF prog-id=13 op=UNLOAD Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.269000 audit: BPF prog-id=14 op=LOAD Jul 14 22:44:21.269000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb7c7ae70 a2=94 a3=54428f items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.269000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.270000 audit: BPF prog-id=14 op=UNLOAD Jul 14 22:44:21.270000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.270000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.270000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.270000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.270000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.270000 audit[3893]: AVC avc: denied { perfmon } 
for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.270000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.270000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.270000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.270000 audit: BPF prog-id=15 op=LOAD Jul 14 22:44:21.270000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb7c7aea0 a2=94 a3=2 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.270000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.270000 audit: BPF prog-id=15 op=UNLOAD Jul 14 22:44:21.375000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.375000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.375000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.375000 
audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.375000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.375000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.375000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.375000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.375000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.375000 audit: BPF prog-id=16 op=LOAD Jul 14 22:44:21.375000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb7c7ad60 a2=94 a3=1 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.375000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.375000 audit: BPF prog-id=16 op=UNLOAD Jul 14 22:44:21.375000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 14 22:44:21.375000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcb7c7ae30 a2=50 a3=7ffcb7c7af10 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.375000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.384000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.384000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb7c7ad70 a2=28 a3=0 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.384000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.384000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.384000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb7c7ada0 a2=28 a3=0 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.384000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.384000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.384000 audit[3893]: 
SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb7c7acb0 a2=28 a3=0 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.384000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.384000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.384000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb7c7adc0 a2=28 a3=0 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.384000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.384000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.384000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb7c7ada0 a2=28 a3=0 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.384000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.384000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.384000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb7c7ad90 
a2=28 a3=0 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.384000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.384000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.384000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb7c7adc0 a2=28 a3=0 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.384000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.384000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.384000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb7c7ada0 a2=28 a3=0 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.384000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.384000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.384000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb7c7adc0 a2=28 a3=0 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.384000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.384000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.384000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb7c7ad90 a2=28 a3=0 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.384000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.384000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.384000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb7c7ae00 a2=28 a3=0 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.384000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcb7c7abb0 a2=50 a3=1 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.385000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit: BPF prog-id=17 op=LOAD Jul 14 22:44:21.385000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcb7c7abb0 a2=94 a3=5 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.385000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.385000 audit: BPF prog-id=17 op=UNLOAD Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcb7c7ac60 a2=50 a3=1 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.385000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffcb7c7ad80 a2=4 a3=38 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.385000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for 
pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { confidentiality } for pid=3893 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 14 22:44:21.385000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcb7c7add0 a2=94 a3=6 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.385000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.385000 audit[3893]: AVC avc: denied { confidentiality } for pid=3893 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 14 22:44:21.385000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcb7c7a580 a2=94 a3=88 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.385000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.386000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.386000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.386000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.386000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.386000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.386000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.386000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.386000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.386000 audit[3893]: AVC avc: denied { perfmon } for pid=3893 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.386000 audit[3893]: AVC avc: denied { bpf } for pid=3893 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.386000 audit[3893]: AVC avc: denied { confidentiality } for pid=3893 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 14 22:44:21.386000 audit[3893]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcb7c7a580 a2=94 a3=88 items=0 ppid=3781 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.386000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit: BPF prog-id=18 op=LOAD Jul 14 22:44:21.393000 audit[3896]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9b3bdc40 a2=98 a3=1999999999999999 items=0 ppid=3781 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.393000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 14 22:44:21.393000 audit: BPF prog-id=18 op=UNLOAD Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit: BPF prog-id=19 op=LOAD Jul 14 22:44:21.393000 audit[3896]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9b3bdb20 a2=94 a3=ffff items=0 ppid=3781 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.393000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 14 22:44:21.393000 audit: BPF prog-id=19 op=UNLOAD Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { 
bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { perfmon } for pid=3896 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit[3896]: AVC avc: denied { bpf } for pid=3896 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.393000 audit: BPF prog-id=20 op=LOAD Jul 14 22:44:21.393000 audit[3896]: SYSCALL arch=c000003e syscall=321 
success=yes exit=3 a0=5 a1=7ffd9b3bdb60 a2=94 a3=7ffd9b3bdd40 items=0 ppid=3781 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.393000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 14 22:44:21.393000 audit: BPF prog-id=20 op=UNLOAD Jul 14 22:44:21.440936 systemd-networkd[1105]: vxlan.calico: Link UP Jul 14 22:44:21.440945 systemd-networkd[1105]: vxlan.calico: Gained carrier Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit: BPF prog-id=21 op=LOAD Jul 14 22:44:21.462000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfe769520 a2=98 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.462000 audit: BPF prog-id=21 op=UNLOAD Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit: BPF prog-id=22 op=LOAD Jul 14 22:44:21.462000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfe769330 a2=94 a3=54428f items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.462000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.462000 audit: BPF prog-id=22 op=UNLOAD Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 
22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit: BPF prog-id=23 op=LOAD Jul 14 22:44:21.462000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfe769360 a2=94 a3=2 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.462000 audit: BPF prog-id=23 op=UNLOAD Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcfe769230 a2=28 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=no 
exit=-22 a0=12 a1=7ffcfe769260 a2=28 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcfe769170 a2=28 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcfe769280 a2=28 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.462000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcfe769260 a2=28 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcfe769250 a2=28 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcfe769280 a2=28 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.462000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.462000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcfe769260 a2=28 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcfe769280 a2=28 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 14 22:44:21.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcfe769250 a2=28 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcfe7692c0 a2=28 a3=0 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit: BPF prog-id=24 op=LOAD Jul 14 22:44:21.463000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcfe769130 a2=94 a3=0 
items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.463000 audit: BPF prog-id=24 op=UNLOAD Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffcfe769120 a2=50 a3=2800 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffcfe769120 a2=50 a3=2800 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.463000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for 
pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit: BPF prog-id=25 op=LOAD Jul 14 22:44:21.463000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcfe768940 a2=94 a3=2 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.463000 audit: BPF prog-id=25 op=UNLOAD Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { perfmon } for pid=3927 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit[3927]: AVC avc: denied { bpf } for pid=3927 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.463000 audit: BPF prog-id=26 op=LOAD Jul 14 22:44:21.463000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcfe768a40 a2=94 a3=30 items=0 ppid=3781 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 14 22:44:21.472000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.472000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.472000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.472000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.472000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.472000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.472000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.472000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.472000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.472000 audit: BPF prog-id=27 op=LOAD Jul 14 22:44:21.472000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe49436ca0 a2=98 a3=0 items=0 ppid=3781 pid=3936 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.472000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.472000 audit: BPF prog-id=27 op=UNLOAD Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 
audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit: BPF prog-id=28 op=LOAD Jul 14 22:44:21.473000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe49436a90 a2=94 a3=54428f items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.473000 audit: BPF prog-id=28 op=UNLOAD Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { perfmon } for 
pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.473000 audit: BPF prog-id=29 op=LOAD Jul 14 22:44:21.473000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe49436ac0 a2=94 a3=2 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.473000 audit: BPF prog-id=29 op=UNLOAD Jul 14 22:44:21.574000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.574000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.574000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.574000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.574000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.574000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.574000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.574000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.574000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.574000 audit: BPF prog-id=30 op=LOAD Jul 14 22:44:21.574000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe49436980 a2=94 a3=1 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jul 14 22:44:21.574000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.574000 audit: BPF prog-id=30 op=UNLOAD Jul 14 22:44:21.574000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.574000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe49436a50 a2=50 a3=7ffe49436b30 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.574000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe49436990 a2=28 a3=0 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe494369c0 a2=28 a3=0 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe494368d0 a2=28 a3=0 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe494369e0 a2=28 a3=0 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe494369c0 a2=28 a3=0 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe494369b0 a2=28 a3=0 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe494369e0 a2=28 a3=0 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe494369c0 a2=28 a3=0 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe494369e0 a2=28 a3=0 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe494369b0 a2=28 a3=0 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe49436a20 a2=28 a3=0 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe494367d0 a2=50 a3=1 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.582000 audit: BPF prog-id=31 op=LOAD Jul 14 22:44:21.582000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe494367d0 a2=94 a3=5 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.583000 audit: BPF prog-id=31 op=UNLOAD Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe49436880 a2=50 a3=1 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.583000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe494369a0 a2=4 a3=38 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.583000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: 
denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { confidentiality } for pid=3936 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 14 22:44:21.583000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe494369f0 a2=94 a3=6 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.583000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { confidentiality } for pid=3936 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 14 22:44:21.583000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe494361a0 a2=94 a3=88 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.583000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { perfmon } for pid=3936 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { confidentiality } for pid=3936 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 14 22:44:21.583000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe494361a0 a2=94 a3=88 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.583000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.583000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.583000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe49437bd0 a2=10 a3=208 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.583000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.584000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.584000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe49437a70 a2=10 a3=3 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.584000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.584000 audit[3936]: AVC avc: denied { bpf } for pid=3936 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.584000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe49437a10 a2=10 a3=3 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.584000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.584000 audit[3936]: AVC avc: denied { bpf } for pid=3936 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 14 22:44:21.584000 audit[3936]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe49437a10 a2=10 a3=7 items=0 ppid=3781 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.584000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 14 22:44:21.590000 audit: BPF prog-id=26 op=UNLOAD Jul 14 22:44:21.641000 audit[3959]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3959 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 22:44:21.641000 audit[3959]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff46fad000 a2=0 a3=7fff46facfec items=0 ppid=3781 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.641000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:21.651281 kubelet[2215]: I0714 22:44:21.651244 2215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ececfe63-8e48-4463-becc-747d3684a68e" path="/var/lib/kubelet/pods/ececfe63-8e48-4463-becc-747d3684a68e/volumes" Jul 14 22:44:21.703000 audit[3962]: NETFILTER_CFG table=filter:102 family=2 entries=39 op=nft_register_chain pid=3962 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 22:44:21.703000 audit[3962]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 
a0=3 a1=7ffc1de8b1c0 a2=0 a3=7ffc1de8b1ac items=0 ppid=3781 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.703000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:21.703000 audit[3957]: NETFILTER_CFG table=nat:103 family=2 entries=15 op=nft_register_chain pid=3957 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 22:44:21.703000 audit[3957]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff401b6210 a2=0 a3=565024f72000 items=0 ppid=3781 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.703000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:21.705000 audit[3958]: NETFILTER_CFG table=raw:104 family=2 entries=21 op=nft_register_chain pid=3958 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 22:44:21.705000 audit[3958]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff261a04f0 a2=0 a3=7fff261a04dc items=0 ppid=3781 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.705000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:21.864988 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 
caliad98cf9e739: link becomes ready Jul 14 22:44:21.866323 systemd-networkd[1105]: caliad98cf9e739: Link UP Jul 14 22:44:21.866575 systemd-networkd[1105]: caliad98cf9e739: Gained carrier Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.782 [INFO][3966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0 whisker-5f6b6647b6- calico-system 0837f224-bd9d-46a3-8c12-c953ec709723 1016 0 2025-07-14 22:44:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f6b6647b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5f6b6647b6-8j8t7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliad98cf9e739 [] [] }} ContainerID="73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" Namespace="calico-system" Pod="whisker-5f6b6647b6-8j8t7" WorkloadEndpoint="localhost-k8s-whisker--5f6b6647b6--8j8t7-" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.782 [INFO][3966] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" Namespace="calico-system" Pod="whisker-5f6b6647b6-8j8t7" WorkloadEndpoint="localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.806 [INFO][3983] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" HandleID="k8s-pod-network.73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" Workload="localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.807 [INFO][3983] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" 
HandleID="k8s-pod-network.73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" Workload="localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e810), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5f6b6647b6-8j8t7", "timestamp":"2025-07-14 22:44:21.806921491 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.807 [INFO][3983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.807 [INFO][3983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.807 [INFO][3983] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.814 [INFO][3983] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" host="localhost" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.819 [INFO][3983] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.823 [INFO][3983] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.824 [INFO][3983] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.826 [INFO][3983] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.826 [INFO][3983] ipam/ipam.go 1220: Attempting to assign 1 addresses 
from block block=192.168.88.128/26 handle="k8s-pod-network.73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" host="localhost" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.827 [INFO][3983] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.838 [INFO][3983] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" host="localhost" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.857 [INFO][3983] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" host="localhost" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.857 [INFO][3983] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" host="localhost" Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.857 [INFO][3983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:44:21.898152 env[1320]: 2025-07-14 22:44:21.857 [INFO][3983] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" HandleID="k8s-pod-network.73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" Workload="localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0" Jul 14 22:44:21.899056 env[1320]: 2025-07-14 22:44:21.859 [INFO][3966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" Namespace="calico-system" Pod="whisker-5f6b6647b6-8j8t7" WorkloadEndpoint="localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0", GenerateName:"whisker-5f6b6647b6-", Namespace:"calico-system", SelfLink:"", UID:"0837f224-bd9d-46a3-8c12-c953ec709723", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 44, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f6b6647b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5f6b6647b6-8j8t7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad98cf9e739", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:21.899056 env[1320]: 2025-07-14 22:44:21.859 [INFO][3966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" Namespace="calico-system" Pod="whisker-5f6b6647b6-8j8t7" WorkloadEndpoint="localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0" Jul 14 22:44:21.899056 env[1320]: 2025-07-14 22:44:21.859 [INFO][3966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad98cf9e739 ContainerID="73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" Namespace="calico-system" Pod="whisker-5f6b6647b6-8j8t7" WorkloadEndpoint="localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0" Jul 14 22:44:21.899056 env[1320]: 2025-07-14 22:44:21.870 [INFO][3966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" Namespace="calico-system" Pod="whisker-5f6b6647b6-8j8t7" WorkloadEndpoint="localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0" Jul 14 22:44:21.899056 env[1320]: 2025-07-14 22:44:21.870 [INFO][3966] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" Namespace="calico-system" Pod="whisker-5f6b6647b6-8j8t7" WorkloadEndpoint="localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0", GenerateName:"whisker-5f6b6647b6-", Namespace:"calico-system", SelfLink:"", UID:"0837f224-bd9d-46a3-8c12-c953ec709723", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 44, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f6b6647b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f", Pod:"whisker-5f6b6647b6-8j8t7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad98cf9e739", MAC:"82:af:05:9f:37:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:21.899056 env[1320]: 2025-07-14 22:44:21.895 [INFO][3966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f" Namespace="calico-system" Pod="whisker-5f6b6647b6-8j8t7" WorkloadEndpoint="localhost-k8s-whisker--5f6b6647b6--8j8t7-eth0" Jul 14 22:44:21.914000 audit[4000]: NETFILTER_CFG table=filter:105 family=2 entries=59 op=nft_register_chain pid=4000 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 22:44:21.914000 audit[4000]: SYSCALL arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7ffedeceba10 a2=0 a3=7ffedeceb9fc items=0 ppid=3781 pid=4000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:21.914000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:21.933197 env[1320]: time="2025-07-14T22:44:21.933136107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:44:21.933197 env[1320]: time="2025-07-14T22:44:21.933182836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:44:21.933197 env[1320]: time="2025-07-14T22:44:21.933193085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:44:21.933385 env[1320]: time="2025-07-14T22:44:21.933344200Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f pid=4011 runtime=io.containerd.runc.v2 Jul 14 22:44:21.955552 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:44:21.976770 env[1320]: time="2025-07-14T22:44:21.976729572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f6b6647b6-8j8t7,Uid:0837f224-bd9d-46a3-8c12-c953ec709723,Namespace:calico-system,Attempt:0,} returns sandbox id \"73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f\"" Jul 14 22:44:21.978581 env[1320]: time="2025-07-14T22:44:21.978530431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 22:44:22.649733 kubelet[2215]: E0714 22:44:22.649654 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:22.650217 env[1320]: time="2025-07-14T22:44:22.649811417Z" level=info msg="StopPodSandbox 
for \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\"" Jul 14 22:44:22.650217 env[1320]: time="2025-07-14T22:44:22.649811407Z" level=info msg="StopPodSandbox for \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\"" Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.714 [INFO][4071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.715 [INFO][4071] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" iface="eth0" netns="/var/run/netns/cni-0eeb2294-f434-9f23-d5ae-5bb8e705d83f" Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.716 [INFO][4071] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" iface="eth0" netns="/var/run/netns/cni-0eeb2294-f434-9f23-d5ae-5bb8e705d83f" Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.716 [INFO][4071] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" iface="eth0" netns="/var/run/netns/cni-0eeb2294-f434-9f23-d5ae-5bb8e705d83f" Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.716 [INFO][4071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.716 [INFO][4071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.736 [INFO][4087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" HandleID="k8s-pod-network.a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.737 [INFO][4087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.737 [INFO][4087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.745 [WARNING][4087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" HandleID="k8s-pod-network.a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.745 [INFO][4087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" HandleID="k8s-pod-network.a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.746 [INFO][4087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:22.750271 env[1320]: 2025-07-14 22:44:22.748 [INFO][4071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:44:22.753411 env[1320]: time="2025-07-14T22:44:22.750360645Z" level=info msg="TearDown network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\" successfully" Jul 14 22:44:22.753411 env[1320]: time="2025-07-14T22:44:22.750395541Z" level=info msg="StopPodSandbox for \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\" returns successfully" Jul 14 22:44:22.753411 env[1320]: time="2025-07-14T22:44:22.751946118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-scf5h,Uid:573e0651-d8b7-4359-8549-45a022613024,Namespace:kube-system,Attempt:1,}" Jul 14 22:44:22.752766 systemd[1]: run-netns-cni\x2d0eeb2294\x2df434\x2d9f23\x2dd5ae\x2d5bb8e705d83f.mount: Deactivated successfully. 
Jul 14 22:44:22.753780 kubelet[2215]: E0714 22:44:22.750685 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.736 [INFO][4070] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.736 [INFO][4070] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" iface="eth0" netns="/var/run/netns/cni-2c490572-8218-ad5c-7108-c7332a8aa41e" Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.736 [INFO][4070] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" iface="eth0" netns="/var/run/netns/cni-2c490572-8218-ad5c-7108-c7332a8aa41e" Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.736 [INFO][4070] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" iface="eth0" netns="/var/run/netns/cni-2c490572-8218-ad5c-7108-c7332a8aa41e" Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.736 [INFO][4070] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.736 [INFO][4070] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.763 [INFO][4095] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" HandleID="k8s-pod-network.e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.763 [INFO][4095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.763 [INFO][4095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.786 [WARNING][4095] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" HandleID="k8s-pod-network.e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.786 [INFO][4095] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" HandleID="k8s-pod-network.e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.788 [INFO][4095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:22.791946 env[1320]: 2025-07-14 22:44:22.790 [INFO][4070] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:44:22.792407 env[1320]: time="2025-07-14T22:44:22.792155900Z" level=info msg="TearDown network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\" successfully" Jul 14 22:44:22.792407 env[1320]: time="2025-07-14T22:44:22.792194403Z" level=info msg="StopPodSandbox for \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\" returns successfully" Jul 14 22:44:22.794489 systemd[1]: run-netns-cni\x2d2c490572\x2d8218\x2dad5c\x2d7108\x2dc7332a8aa41e.mount: Deactivated successfully. Jul 14 22:44:22.795434 env[1320]: time="2025-07-14T22:44:22.795395455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f66f5ffdc-7hdt4,Uid:28f9fef2-ff3a-4233-92f1-c94976e9b138,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:44:22.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.12:22-10.0.0.1:34054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:44:22.921561 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:34054.service. Jul 14 22:44:22.923597 kernel: kauditd_printk_skb: 561 callbacks suppressed Jul 14 22:44:22.923868 kernel: audit: type=1130 audit(1752533062.920:456): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.12:22-10.0.0.1:34054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:22.984000 audit[4105]: USER_ACCT pid=4105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:22.985911 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 34054 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:22.988595 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:22.985000 audit[4105]: CRED_ACQ pid=4105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:22.993557 kernel: audit: type=1101 audit(1752533062.984:457): pid=4105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:22.993693 kernel: audit: type=1103 audit(1752533062.985:458): pid=4105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:22.994769 systemd[1]: Started session-11.scope. 
Jul 14 22:44:22.995205 systemd-logind[1309]: New session 11 of user core. Jul 14 22:44:22.998138 kernel: audit: type=1006 audit(1752533062.985:459): pid=4105 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jul 14 22:44:22.998177 kernel: audit: type=1300 audit(1752533062.985:459): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea723a2f0 a2=3 a3=0 items=0 ppid=1 pid=4105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:22.985000 audit[4105]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea723a2f0 a2=3 a3=0 items=0 ppid=1 pid=4105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:22.985000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:23.001946 kernel: audit: type=1327 audit(1752533062.985:459): proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:23.002093 kernel: audit: type=1105 audit(1752533063.000:460): pid=4105 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:23.000000 audit[4105]: USER_START pid=4105 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:23.001000 audit[4108]: CRED_ACQ pid=4108 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:23.025287 kernel: audit: type=1103 audit(1752533063.001:461): pid=4108 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:23.123156 sshd[4105]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:23.123000 audit[4105]: USER_END pid=4105 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:23.125312 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:34054.service: Deactivated successfully. Jul 14 22:44:23.126359 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 22:44:23.126853 systemd-logind[1309]: Session 11 logged out. Waiting for processes to exit. Jul 14 22:44:23.127617 systemd-logind[1309]: Removed session 11. 
Jul 14 22:44:23.123000 audit[4105]: CRED_DISP pid=4105 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:23.133793 kernel: audit: type=1106 audit(1752533063.123:462): pid=4105 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:23.133842 kernel: audit: type=1104 audit(1752533063.123:463): pid=4105 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:23.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.12:22-10.0.0.1:34054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:44:23.281123 systemd-networkd[1105]: vxlan.calico: Gained IPv6LL Jul 14 22:44:23.314442 systemd-networkd[1105]: calib1613c8eb5c: Link UP Jul 14 22:44:23.317203 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 22:44:23.317261 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib1613c8eb5c: link becomes ready Jul 14 22:44:23.317198 systemd-networkd[1105]: calib1613c8eb5c: Gained carrier Jul 14 22:44:23.345272 systemd-networkd[1105]: caliad98cf9e739: Gained IPv6LL Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.201 [INFO][4121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0 coredns-7c65d6cfc9- kube-system 573e0651-d8b7-4359-8549-45a022613024 1032 0 2025-07-14 22:43:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-scf5h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib1613c8eb5c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-scf5h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--scf5h-" Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.201 [INFO][4121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-scf5h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.251 [INFO][4137] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" 
HandleID="k8s-pod-network.1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.251 [INFO][4137] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" HandleID="k8s-pod-network.1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001396a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-scf5h", "timestamp":"2025-07-14 22:44:23.251396602 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.251 [INFO][4137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.251 [INFO][4137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.251 [INFO][4137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.259 [INFO][4137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" host="localhost" Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.263 [INFO][4137] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.266 [INFO][4137] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.268 [INFO][4137] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.270 [INFO][4137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.270 [INFO][4137] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" host="localhost" Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.271 [INFO][4137] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.295 [INFO][4137] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" host="localhost" Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.310 [INFO][4137] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" host="localhost" Jul 14 
22:44:23.347298 env[1320]: 2025-07-14 22:44:23.310 [INFO][4137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" host="localhost" Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.310 [INFO][4137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:23.347298 env[1320]: 2025-07-14 22:44:23.310 [INFO][4137] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" HandleID="k8s-pod-network.1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:44:23.347882 env[1320]: 2025-07-14 22:44:23.312 [INFO][4121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-scf5h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"573e0651-d8b7-4359-8549-45a022613024", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7c65d6cfc9-scf5h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1613c8eb5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:23.347882 env[1320]: 2025-07-14 22:44:23.312 [INFO][4121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-scf5h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:44:23.347882 env[1320]: 2025-07-14 22:44:23.312 [INFO][4121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1613c8eb5c ContainerID="1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-scf5h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:44:23.347882 env[1320]: 2025-07-14 22:44:23.317 [INFO][4121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-scf5h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:44:23.347882 env[1320]: 2025-07-14 22:44:23.317 [INFO][4121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-scf5h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"573e0651-d8b7-4359-8549-45a022613024", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e", Pod:"coredns-7c65d6cfc9-scf5h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1613c8eb5c", MAC:"da:6e:46:ad:aa:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:23.347882 env[1320]: 2025-07-14 22:44:23.340 [INFO][4121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-scf5h" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:44:23.353000 audit[4156]: NETFILTER_CFG table=filter:106 family=2 entries=42 op=nft_register_chain pid=4156 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 22:44:23.353000 audit[4156]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7fff69c92ae0 a2=0 a3=7fff69c92acc items=0 ppid=3781 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:23.353000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:23.384430 env[1320]: time="2025-07-14T22:44:23.384268077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:44:23.384430 env[1320]: time="2025-07-14T22:44:23.384323422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:44:23.384430 env[1320]: time="2025-07-14T22:44:23.384333341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:44:23.384926 env[1320]: time="2025-07-14T22:44:23.384591839Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e pid=4179 runtime=io.containerd.runc.v2 Jul 14 22:44:23.406977 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:44:23.430720 env[1320]: time="2025-07-14T22:44:23.430665082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-scf5h,Uid:573e0651-d8b7-4359-8549-45a022613024,Namespace:kube-system,Attempt:1,} returns sandbox id \"1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e\"" Jul 14 22:44:23.431704 kubelet[2215]: E0714 22:44:23.431516 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:23.436128 env[1320]: time="2025-07-14T22:44:23.436099448Z" level=info msg="CreateContainer within sandbox \"1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:44:23.634002 systemd-networkd[1105]: cali7ac7f929035: Link UP Jul 14 22:44:23.650720 env[1320]: time="2025-07-14T22:44:23.650683277Z" level=info msg="StopPodSandbox for \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\"" Jul 14 22:44:23.653393 env[1320]: time="2025-07-14T22:44:23.653339661Z" level=info msg="StopPodSandbox for \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\"" Jul 14 22:44:23.660100 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7ac7f929035: link becomes ready Jul 14 22:44:23.660472 systemd-networkd[1105]: cali7ac7f929035: Gained carrier Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.382 [INFO][4157] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0 calico-apiserver-5f66f5ffdc- calico-apiserver 28f9fef2-ff3a-4233-92f1-c94976e9b138 1033 0 2025-07-14 22:43:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f66f5ffdc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f66f5ffdc-7hdt4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7ac7f929035 [] [] }} ContainerID="3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-7hdt4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.382 [INFO][4157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-7hdt4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.446 [INFO][4213] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" HandleID="k8s-pod-network.3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.446 [INFO][4213] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" HandleID="k8s-pod-network.3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000131720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f66f5ffdc-7hdt4", "timestamp":"2025-07-14 22:44:23.446675077 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.446 [INFO][4213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.447 [INFO][4213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.447 [INFO][4213] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.456 [INFO][4213] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" host="localhost" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.477 [INFO][4213] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.481 [INFO][4213] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.483 [INFO][4213] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.484 [INFO][4213] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.485 [INFO][4213] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" 
host="localhost" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.486 [INFO][4213] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.510 [INFO][4213] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" host="localhost" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.630 [INFO][4213] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" host="localhost" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.630 [INFO][4213] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" host="localhost" Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.630 [INFO][4213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:44:23.940554 env[1320]: 2025-07-14 22:44:23.630 [INFO][4213] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" HandleID="k8s-pod-network.3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:44:23.941394 env[1320]: 2025-07-14 22:44:23.632 [INFO][4157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-7hdt4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0", GenerateName:"calico-apiserver-5f66f5ffdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"28f9fef2-ff3a-4233-92f1-c94976e9b138", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f66f5ffdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f66f5ffdc-7hdt4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ac7f929035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:23.941394 env[1320]: 2025-07-14 22:44:23.632 [INFO][4157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-7hdt4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:44:23.941394 env[1320]: 2025-07-14 22:44:23.632 [INFO][4157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ac7f929035 ContainerID="3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-7hdt4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:44:23.941394 env[1320]: 2025-07-14 22:44:23.661 [INFO][4157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-7hdt4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:44:23.941394 env[1320]: 2025-07-14 22:44:23.661 [INFO][4157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-7hdt4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0", GenerateName:"calico-apiserver-5f66f5ffdc-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"28f9fef2-ff3a-4233-92f1-c94976e9b138", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f66f5ffdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a", Pod:"calico-apiserver-5f66f5ffdc-7hdt4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ac7f929035", MAC:"b2:87:93:b2:cb:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:23.941394 env[1320]: 2025-07-14 22:44:23.938 [INFO][4157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-7hdt4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:44:23.950000 audit[4272]: NETFILTER_CFG table=filter:107 family=2 entries=54 op=nft_register_chain pid=4272 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 22:44:23.950000 audit[4272]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffee58eec50 a2=0 a3=7ffee58eec3c items=0 ppid=3781 pid=4272 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:23.950000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:23.994236 env[1320]: time="2025-07-14T22:44:23.994149283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:44:23.994236 env[1320]: time="2025-07-14T22:44:23.994188107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:44:23.994236 env[1320]: time="2025-07-14T22:44:23.994200550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:44:23.994462 env[1320]: time="2025-07-14T22:44:23.994319444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a pid=4279 runtime=io.containerd.runc.v2 Jul 14 22:44:24.016512 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:44:24.043385 env[1320]: time="2025-07-14T22:44:24.043330827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f66f5ffdc-7hdt4,Uid:28f9fef2-ff3a-4233-92f1-c94976e9b138,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a\"" Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:23.993 [INFO][4252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:44:24.097713 env[1320]: 
2025-07-14 22:44:23.993 [INFO][4252] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" iface="eth0" netns="/var/run/netns/cni-1db3f384-b3d0-7be1-6702-5edd06d97c35" Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:23.993 [INFO][4252] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" iface="eth0" netns="/var/run/netns/cni-1db3f384-b3d0-7be1-6702-5edd06d97c35" Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:23.993 [INFO][4252] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" iface="eth0" netns="/var/run/netns/cni-1db3f384-b3d0-7be1-6702-5edd06d97c35" Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:23.993 [INFO][4252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:23.993 [INFO][4252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:24.019 [INFO][4287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" HandleID="k8s-pod-network.62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:24.019 [INFO][4287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:24.019 [INFO][4287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:24.092 [WARNING][4287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" HandleID="k8s-pod-network.62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:24.092 [INFO][4287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" HandleID="k8s-pod-network.62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:24.094 [INFO][4287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:24.097713 env[1320]: 2025-07-14 22:44:24.095 [INFO][4252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:44:24.101574 systemd[1]: run-netns-cni\x2d1db3f384\x2db3d0\x2d7be1\x2d6702\x2d5edd06d97c35.mount: Deactivated successfully. 
Jul 14 22:44:24.102929 env[1320]: time="2025-07-14T22:44:24.102877481Z" level=info msg="TearDown network for sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\" successfully" Jul 14 22:44:24.103025 env[1320]: time="2025-07-14T22:44:24.102929789Z" level=info msg="StopPodSandbox for \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\" returns successfully" Jul 14 22:44:24.103654 env[1320]: time="2025-07-14T22:44:24.103619101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lx29x,Uid:26698740-5794-455a-b832-1e56047f0f19,Namespace:calico-system,Attempt:1,}" Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.093 [INFO][4251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.093 [INFO][4251] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" iface="eth0" netns="/var/run/netns/cni-de6f9c27-6f50-8e0f-73e1-551dff11b2d5" Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.094 [INFO][4251] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" iface="eth0" netns="/var/run/netns/cni-de6f9c27-6f50-8e0f-73e1-551dff11b2d5" Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.094 [INFO][4251] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" iface="eth0" netns="/var/run/netns/cni-de6f9c27-6f50-8e0f-73e1-551dff11b2d5" Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.094 [INFO][4251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.094 [INFO][4251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.113 [INFO][4323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" HandleID="k8s-pod-network.ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.113 [INFO][4323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.114 [INFO][4323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.120 [WARNING][4323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" HandleID="k8s-pod-network.ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.120 [INFO][4323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" HandleID="k8s-pod-network.ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.121 [INFO][4323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:24.125815 env[1320]: 2025-07-14 22:44:24.123 [INFO][4251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:44:24.126384 env[1320]: time="2025-07-14T22:44:24.125985920Z" level=info msg="TearDown network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\" successfully" Jul 14 22:44:24.126384 env[1320]: time="2025-07-14T22:44:24.126020566Z" level=info msg="StopPodSandbox for \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\" returns successfully" Jul 14 22:44:24.126672 env[1320]: time="2025-07-14T22:44:24.126636789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87ddffd96-qc6h6,Uid:cb90fe31-1c87-48f5-81dd-a9f3638c4eaf,Namespace:calico-system,Attempt:1,}" Jul 14 22:44:24.128697 systemd[1]: run-netns-cni\x2dde6f9c27\x2d6f50\x2d8e0f\x2d73e1\x2d551dff11b2d5.mount: Deactivated successfully. Jul 14 22:44:24.170607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048529187.mount: Deactivated successfully. Jul 14 22:44:24.174404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2773620292.mount: Deactivated successfully. 
Jul 14 22:44:24.398371 env[1320]: time="2025-07-14T22:44:24.398305508Z" level=info msg="CreateContainer within sandbox \"1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c8a2e8325a18a5618e56b3892caa1cf0347494d83c4f09635ee8e2df08803d26\"" Jul 14 22:44:24.398904 env[1320]: time="2025-07-14T22:44:24.398883650Z" level=info msg="StartContainer for \"c8a2e8325a18a5618e56b3892caa1cf0347494d83c4f09635ee8e2df08803d26\"" Jul 14 22:44:24.620202 env[1320]: time="2025-07-14T22:44:24.620142439Z" level=info msg="StartContainer for \"c8a2e8325a18a5618e56b3892caa1cf0347494d83c4f09635ee8e2df08803d26\" returns successfully" Jul 14 22:44:24.945485 systemd-networkd[1105]: calib1613c8eb5c: Gained IPv6LL Jul 14 22:44:25.064522 systemd-networkd[1105]: calie1870ab4050: Link UP Jul 14 22:44:25.067280 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 22:44:25.067339 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie1870ab4050: link becomes ready Jul 14 22:44:25.067668 systemd-networkd[1105]: calie1870ab4050: Gained carrier Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:24.970 [INFO][4365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lx29x-eth0 csi-node-driver- calico-system 26698740-5794-455a-b832-1e56047f0f19 1049 0 2025-07-14 22:43:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lx29x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie1870ab4050 [] [] }} ContainerID="85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" Namespace="calico-system" 
Pod="csi-node-driver-lx29x" WorkloadEndpoint="localhost-k8s-csi--node--driver--lx29x-" Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:24.970 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" Namespace="calico-system" Pod="csi-node-driver-lx29x" WorkloadEndpoint="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.010 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" HandleID="k8s-pod-network.85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.010 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" HandleID="k8s-pod-network.85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lx29x", "timestamp":"2025-07-14 22:44:25.010563862 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.010 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.010 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.010 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.016 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" host="localhost" Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.020 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.024 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.025 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.027 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.027 [INFO][4379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" host="localhost" Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.028 [INFO][4379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930 Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.037 [INFO][4379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" host="localhost" Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.060 [INFO][4379] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" host="localhost" Jul 14 
22:44:25.141866 env[1320]: 2025-07-14 22:44:25.060 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" host="localhost" Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.060 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:25.141866 env[1320]: 2025-07-14 22:44:25.060 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" HandleID="k8s-pod-network.85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:44:25.143314 env[1320]: 2025-07-14 22:44:25.062 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" Namespace="calico-system" Pod="csi-node-driver-lx29x" WorkloadEndpoint="localhost-k8s-csi--node--driver--lx29x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lx29x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26698740-5794-455a-b832-1e56047f0f19", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lx29x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1870ab4050", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:25.143314 env[1320]: 2025-07-14 22:44:25.062 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" Namespace="calico-system" Pod="csi-node-driver-lx29x" WorkloadEndpoint="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:44:25.143314 env[1320]: 2025-07-14 22:44:25.062 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1870ab4050 ContainerID="85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" Namespace="calico-system" Pod="csi-node-driver-lx29x" WorkloadEndpoint="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:44:25.143314 env[1320]: 2025-07-14 22:44:25.067 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" Namespace="calico-system" Pod="csi-node-driver-lx29x" WorkloadEndpoint="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:44:25.143314 env[1320]: 2025-07-14 22:44:25.068 [INFO][4365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" Namespace="calico-system" Pod="csi-node-driver-lx29x" WorkloadEndpoint="localhost-k8s-csi--node--driver--lx29x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lx29x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26698740-5794-455a-b832-1e56047f0f19", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930", Pod:"csi-node-driver-lx29x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1870ab4050", MAC:"5e:22:12:e4:63:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:25.143314 env[1320]: 2025-07-14 22:44:25.140 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930" Namespace="calico-system" Pod="csi-node-driver-lx29x" WorkloadEndpoint="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:44:25.155000 audit[4416]: NETFILTER_CFG table=filter:108 family=2 entries=44 op=nft_register_chain pid=4416 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 
22:44:25.155000 audit[4416]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7fff8f2bac50 a2=0 a3=7fff8f2bac3c items=0 ppid=3781 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:25.155000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:25.179026 env[1320]: time="2025-07-14T22:44:25.178943069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:44:25.179286 env[1320]: time="2025-07-14T22:44:25.178996950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:44:25.179286 env[1320]: time="2025-07-14T22:44:25.179008902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:44:25.179506 env[1320]: time="2025-07-14T22:44:25.179258634Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930 pid=4426 runtime=io.containerd.runc.v2 Jul 14 22:44:25.202086 systemd-networkd[1105]: cali7ac7f929035: Gained IPv6LL Jul 14 22:44:25.205031 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:44:25.217745 env[1320]: time="2025-07-14T22:44:25.217683703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lx29x,Uid:26698740-5794-455a-b832-1e56047f0f19,Namespace:calico-system,Attempt:1,} returns sandbox id \"85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930\"" Jul 14 22:44:25.229868 kubelet[2215]: E0714 22:44:25.229823 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:25.293399 systemd-networkd[1105]: calif7c82abb245: Link UP Jul 14 22:44:25.295039 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif7c82abb245: link becomes ready Jul 14 22:44:25.294759 systemd-networkd[1105]: calif7c82abb245: Gained carrier Jul 14 22:44:25.351529 kubelet[2215]: I0714 22:44:25.350861 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-scf5h" podStartSLOduration=72.350837338 podStartE2EDuration="1m12.350837338s" podCreationTimestamp="2025-07-14 22:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:44:25.335422469 +0000 UTC m=+77.776764612" watchObservedRunningTime="2025-07-14 22:44:25.350837338 +0000 UTC m=+77.792179511" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.143 
[INFO][4388] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0 calico-kube-controllers-87ddffd96- calico-system cb90fe31-1c87-48f5-81dd-a9f3638c4eaf 1050 0 2025-07-14 22:43:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:87ddffd96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-87ddffd96-qc6h6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif7c82abb245 [] [] }} ContainerID="0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" Namespace="calico-system" Pod="calico-kube-controllers-87ddffd96-qc6h6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.145 [INFO][4388] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" Namespace="calico-system" Pod="calico-kube-controllers-87ddffd96-qc6h6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.168 [INFO][4411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" HandleID="k8s-pod-network.0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.168 [INFO][4411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" 
HandleID="k8s-pod-network.0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004950e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-87ddffd96-qc6h6", "timestamp":"2025-07-14 22:44:25.168333227 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.168 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.168 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.168 [INFO][4411] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.174 [INFO][4411] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" host="localhost" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.180 [INFO][4411] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.185 [INFO][4411] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.187 [INFO][4411] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.192 [INFO][4411] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.192 [INFO][4411] ipam/ipam.go 1220: 
Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" host="localhost" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.197 [INFO][4411] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831 Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.210 [INFO][4411] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" host="localhost" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.288 [INFO][4411] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" host="localhost" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.288 [INFO][4411] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" host="localhost" Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.288 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:44:25.354426 env[1320]: 2025-07-14 22:44:25.288 [INFO][4411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" HandleID="k8s-pod-network.0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:44:25.355417 env[1320]: 2025-07-14 22:44:25.291 [INFO][4388] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" Namespace="calico-system" Pod="calico-kube-controllers-87ddffd96-qc6h6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0", GenerateName:"calico-kube-controllers-87ddffd96-", Namespace:"calico-system", SelfLink:"", UID:"cb90fe31-1c87-48f5-81dd-a9f3638c4eaf", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"87ddffd96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-87ddffd96-qc6h6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7c82abb245", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:25.355417 env[1320]: 2025-07-14 22:44:25.291 [INFO][4388] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" Namespace="calico-system" Pod="calico-kube-controllers-87ddffd96-qc6h6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:44:25.355417 env[1320]: 2025-07-14 22:44:25.291 [INFO][4388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7c82abb245 ContainerID="0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" Namespace="calico-system" Pod="calico-kube-controllers-87ddffd96-qc6h6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:44:25.355417 env[1320]: 2025-07-14 22:44:25.295 [INFO][4388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" Namespace="calico-system" Pod="calico-kube-controllers-87ddffd96-qc6h6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:44:25.355417 env[1320]: 2025-07-14 22:44:25.295 [INFO][4388] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" Namespace="calico-system" Pod="calico-kube-controllers-87ddffd96-qc6h6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0", 
GenerateName:"calico-kube-controllers-87ddffd96-", Namespace:"calico-system", SelfLink:"", UID:"cb90fe31-1c87-48f5-81dd-a9f3638c4eaf", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"87ddffd96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831", Pod:"calico-kube-controllers-87ddffd96-qc6h6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7c82abb245", MAC:"3e:db:84:0a:86:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:25.355417 env[1320]: 2025-07-14 22:44:25.351 [INFO][4388] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831" Namespace="calico-system" Pod="calico-kube-controllers-87ddffd96-qc6h6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:44:25.363000 audit[4470]: NETFILTER_CFG table=filter:109 family=2 entries=48 op=nft_register_chain pid=4470 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 22:44:25.363000 audit[4470]: SYSCALL arch=c000003e syscall=46 
success=yes exit=23140 a0=3 a1=7ffcd64d26c0 a2=0 a3=7ffcd64d26ac items=0 ppid=3781 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:25.363000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:25.373000 audit[4471]: NETFILTER_CFG table=filter:110 family=2 entries=20 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:25.373000 audit[4471]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffebe6e8400 a2=0 a3=7ffebe6e83ec items=0 ppid=2323 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:25.373000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:25.377000 audit[4471]: NETFILTER_CFG table=nat:111 family=2 entries=14 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:25.377000 audit[4471]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffebe6e8400 a2=0 a3=0 items=0 ppid=2323 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:25.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:25.389697 env[1320]: time="2025-07-14T22:44:25.389615764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:44:25.389864 env[1320]: time="2025-07-14T22:44:25.389679935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:44:25.390019 env[1320]: time="2025-07-14T22:44:25.389947891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:44:25.390313 env[1320]: time="2025-07-14T22:44:25.390260280Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831 pid=4478 runtime=io.containerd.runc.v2 Jul 14 22:44:25.414706 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:44:25.438399 env[1320]: time="2025-07-14T22:44:25.438358663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87ddffd96-qc6h6,Uid:cb90fe31-1c87-48f5-81dd-a9f3638c4eaf,Namespace:calico-system,Attempt:1,} returns sandbox id \"0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831\"" Jul 14 22:44:25.598949 env[1320]: time="2025-07-14T22:44:25.598870985Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:25.744560 env[1320]: time="2025-07-14T22:44:25.744491775Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:25.996281 env[1320]: time="2025-07-14T22:44:25.996155753Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 14 22:44:26.016744 env[1320]: time="2025-07-14T22:44:26.016701691Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:26.017570 env[1320]: time="2025-07-14T22:44:26.017547707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 14 22:44:26.019787 env[1320]: time="2025-07-14T22:44:26.019752438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:44:26.020730 env[1320]: time="2025-07-14T22:44:26.020697842Z" level=info msg="CreateContainer within sandbox \"73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 22:44:26.161118 systemd-networkd[1105]: calie1870ab4050: Gained IPv6LL Jul 14 22:44:26.234902 kubelet[2215]: E0714 22:44:26.234823 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:26.520429 env[1320]: time="2025-07-14T22:44:26.520364598Z" level=info msg="CreateContainer within sandbox \"73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a77530923514d243d71fc4b156b4bf8c6e29a279c4865c7871ab61cc9dacc3bf\"" Jul 14 22:44:26.520934 env[1320]: time="2025-07-14T22:44:26.520913484Z" level=info msg="StartContainer for \"a77530923514d243d71fc4b156b4bf8c6e29a279c4865c7871ab61cc9dacc3bf\"" Jul 14 22:44:26.548000 audit[4540]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:26.548000 
audit[4540]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe51b52aa0 a2=0 a3=7ffe51b52a8c items=0 ppid=2323 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:26.548000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:26.554000 audit[4540]: NETFILTER_CFG table=nat:113 family=2 entries=35 op=nft_register_chain pid=4540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:26.554000 audit[4540]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe51b52aa0 a2=0 a3=7ffe51b52a8c items=0 ppid=2323 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:26.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:26.610170 systemd-networkd[1105]: calif7c82abb245: Gained IPv6LL Jul 14 22:44:26.650270 env[1320]: time="2025-07-14T22:44:26.650217859Z" level=info msg="StopPodSandbox for \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\"" Jul 14 22:44:26.650485 env[1320]: time="2025-07-14T22:44:26.650334519Z" level=info msg="StopPodSandbox for \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\"" Jul 14 22:44:26.870217 env[1320]: time="2025-07-14T22:44:26.870141646Z" level=info msg="StartContainer for \"a77530923514d243d71fc4b156b4bf8c6e29a279c4865c7871ab61cc9dacc3bf\" returns successfully" Jul 14 22:44:27.237879 kubelet[2215]: E0714 22:44:27.237743 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.228 [INFO][4581] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.229 [INFO][4581] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" iface="eth0" netns="/var/run/netns/cni-7ce5c134-fbac-10aa-48bf-a5407a0dea31" Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.229 [INFO][4581] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" iface="eth0" netns="/var/run/netns/cni-7ce5c134-fbac-10aa-48bf-a5407a0dea31" Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.229 [INFO][4581] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" iface="eth0" netns="/var/run/netns/cni-7ce5c134-fbac-10aa-48bf-a5407a0dea31" Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.229 [INFO][4581] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.229 [INFO][4581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.248 [INFO][4600] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" HandleID="k8s-pod-network.832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.249 [INFO][4600] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.249 [INFO][4600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.401 [WARNING][4600] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" HandleID="k8s-pod-network.832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.401 [INFO][4600] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" HandleID="k8s-pod-network.832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.422 [INFO][4600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:27.427167 env[1320]: 2025-07-14 22:44:27.425 [INFO][4581] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:44:27.427682 env[1320]: time="2025-07-14T22:44:27.427309271Z" level=info msg="TearDown network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\" successfully" Jul 14 22:44:27.427682 env[1320]: time="2025-07-14T22:44:27.427337514Z" level=info msg="StopPodSandbox for \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\" returns successfully" Jul 14 22:44:27.427936 env[1320]: time="2025-07-14T22:44:27.427908963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-q9tk2,Uid:63689819-3628-4d96-bf6f-7f8f144f2164,Namespace:calico-system,Attempt:1,}" Jul 14 22:44:27.430671 systemd[1]: run-netns-cni\x2d7ce5c134\x2dfbac\x2d10aa\x2d48bf\x2da5407a0dea31.mount: Deactivated successfully. Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.229 [INFO][4582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.229 [INFO][4582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" iface="eth0" netns="/var/run/netns/cni-2acb2e00-2146-7609-3fb9-0575ce1529af" Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.229 [INFO][4582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" iface="eth0" netns="/var/run/netns/cni-2acb2e00-2146-7609-3fb9-0575ce1529af" Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.229 [INFO][4582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" iface="eth0" netns="/var/run/netns/cni-2acb2e00-2146-7609-3fb9-0575ce1529af" Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.230 [INFO][4582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.230 [INFO][4582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.250 [INFO][4602] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" HandleID="k8s-pod-network.cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.250 [INFO][4602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.422 [INFO][4602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.465 [WARNING][4602] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" HandleID="k8s-pod-network.cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.465 [INFO][4602] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" HandleID="k8s-pod-network.cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.467 [INFO][4602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:27.470092 env[1320]: 2025-07-14 22:44:27.468 [INFO][4582] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:44:27.470652 env[1320]: time="2025-07-14T22:44:27.470618978Z" level=info msg="TearDown network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\" successfully" Jul 14 22:44:27.470652 env[1320]: time="2025-07-14T22:44:27.470650097Z" level=info msg="StopPodSandbox for \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\" returns successfully" Jul 14 22:44:27.471261 env[1320]: time="2025-07-14T22:44:27.471219101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f66f5ffdc-799wc,Uid:e904006e-54c2-458a-afd4-0856ab783ed3,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:44:27.473142 systemd[1]: run-netns-cni\x2d2acb2e00\x2d2146\x2d7609\x2d3fb9\x2d0575ce1529af.mount: Deactivated successfully. 
Jul 14 22:44:27.649883 kubelet[2215]: E0714 22:44:27.649852 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:28.126501 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:49658.service. Jul 14 22:44:28.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.12:22-10.0.0.1:49658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:28.158463 kernel: kauditd_printk_skb: 25 callbacks suppressed Jul 14 22:44:28.158529 kernel: audit: type=1130 audit(1752533068.125:473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.12:22-10.0.0.1:49658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:28.196000 audit[4617]: USER_ACCT pid=4617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.198040 sshd[4617]: Accepted publickey for core from 10.0.0.1 port 49658 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:28.200429 sshd[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:28.199000 audit[4617]: CRED_ACQ pid=4617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.204091 systemd-logind[1309]: New session 12 of user core. 
Jul 14 22:44:28.205039 kernel: audit: type=1101 audit(1752533068.196:474): pid=4617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.205089 kernel: audit: type=1103 audit(1752533068.199:475): pid=4617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.205028 systemd[1]: Started session-12.scope. Jul 14 22:44:28.199000 audit[4617]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff759656d0 a2=3 a3=0 items=0 ppid=1 pid=4617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:28.213832 kernel: audit: type=1006 audit(1752533068.199:476): pid=4617 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jul 14 22:44:28.213894 kernel: audit: type=1300 audit(1752533068.199:476): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff759656d0 a2=3 a3=0 items=0 ppid=1 pid=4617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:28.213920 kernel: audit: type=1327 audit(1752533068.199:476): proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:28.199000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:28.209000 audit[4617]: USER_START pid=4617 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.239463 kubelet[2215]: E0714 22:44:28.239444 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:28.239765 kernel: audit: type=1105 audit(1752533068.209:477): pid=4617 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.210000 audit[4620]: CRED_ACQ pid=4620 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.243341 kernel: audit: type=1103 audit(1752533068.210:478): pid=4620 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.421104 sshd[4617]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:28.421000 audit[4617]: USER_END pid=4617 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.423639 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:49658.service: Deactivated successfully. Jul 14 22:44:28.424509 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 22:44:28.425690 systemd-logind[1309]: Session 12 logged out. Waiting for processes to exit. 
Jul 14 22:44:28.421000 audit[4617]: CRED_DISP pid=4617 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.426811 systemd-logind[1309]: Removed session 12. Jul 14 22:44:28.429681 kernel: audit: type=1106 audit(1752533068.421:479): pid=4617 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.429756 kernel: audit: type=1104 audit(1752533068.421:480): pid=4617 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:28.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.12:22-10.0.0.1:49658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:28.649456 env[1320]: time="2025-07-14T22:44:28.649401846Z" level=info msg="StopPodSandbox for \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\"" Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.883 [INFO][4644] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.883 [INFO][4644] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" iface="eth0" netns="/var/run/netns/cni-6955fc0b-dfa1-809c-d9b1-f9cd9d7a1606" Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.884 [INFO][4644] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" iface="eth0" netns="/var/run/netns/cni-6955fc0b-dfa1-809c-d9b1-f9cd9d7a1606" Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.884 [INFO][4644] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" iface="eth0" netns="/var/run/netns/cni-6955fc0b-dfa1-809c-d9b1-f9cd9d7a1606" Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.884 [INFO][4644] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.884 [INFO][4644] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.901 [INFO][4652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" HandleID="k8s-pod-network.74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.901 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.901 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.919 [WARNING][4652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" HandleID="k8s-pod-network.74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.919 [INFO][4652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" HandleID="k8s-pod-network.74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.957 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:28.960383 env[1320]: 2025-07-14 22:44:28.958 [INFO][4644] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:44:28.960861 env[1320]: time="2025-07-14T22:44:28.960538742Z" level=info msg="TearDown network for sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\" successfully" Jul 14 22:44:28.960861 env[1320]: time="2025-07-14T22:44:28.960577276Z" level=info msg="StopPodSandbox for \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\" returns successfully" Jul 14 22:44:28.960909 kubelet[2215]: E0714 22:44:28.960887 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:28.961385 env[1320]: time="2025-07-14T22:44:28.961300791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j6xgm,Uid:44d35327-7f5c-4584-8b0a-dbf8a90adea6,Namespace:kube-system,Attempt:1,}" Jul 14 22:44:28.963766 systemd[1]: run-netns-cni\x2d6955fc0b\x2ddfa1\x2d809c\x2dd9b1\x2df9cd9d7a1606.mount: Deactivated successfully. 
Jul 14 22:44:29.476750 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 22:44:29.476855 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic558ab75054: link becomes ready Jul 14 22:44:29.474630 systemd-networkd[1105]: calic558ab75054: Link UP Jul 14 22:44:29.477584 systemd-networkd[1105]: calic558ab75054: Gained carrier Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.276 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0 goldmane-58fd7646b9- calico-system 63689819-3628-4d96-bf6f-7f8f144f2164 1088 0 2025-07-14 22:43:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-q9tk2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic558ab75054 [] [] }} ContainerID="09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" Namespace="calico-system" Pod="goldmane-58fd7646b9-q9tk2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--q9tk2-" Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.276 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" Namespace="calico-system" Pod="goldmane-58fd7646b9-q9tk2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.296 [INFO][4676] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" HandleID="k8s-pod-network.09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.297 [INFO][4676] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" HandleID="k8s-pod-network.09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011ab20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-q9tk2", "timestamp":"2025-07-14 22:44:29.296722984 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.297 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.297 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.297 [INFO][4676] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.305 [INFO][4676] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" host="localhost" Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.308 [INFO][4676] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.311 [INFO][4676] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.313 [INFO][4676] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.314 [INFO][4676] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.314 [INFO][4676] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" host="localhost" Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.316 [INFO][4676] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.342 [INFO][4676] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" host="localhost" Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.471 [INFO][4676] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" host="localhost" Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.471 [INFO][4676] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" host="localhost" Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.471 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:44:29.520357 env[1320]: 2025-07-14 22:44:29.471 [INFO][4676] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" HandleID="k8s-pod-network.09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:44:29.520976 env[1320]: 2025-07-14 22:44:29.473 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" Namespace="calico-system" Pod="goldmane-58fd7646b9-q9tk2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"63689819-3628-4d96-bf6f-7f8f144f2164", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-q9tk2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic558ab75054", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:29.520976 env[1320]: 2025-07-14 22:44:29.473 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" Namespace="calico-system" Pod="goldmane-58fd7646b9-q9tk2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:44:29.520976 env[1320]: 2025-07-14 22:44:29.473 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic558ab75054 ContainerID="09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" Namespace="calico-system" Pod="goldmane-58fd7646b9-q9tk2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:44:29.520976 env[1320]: 2025-07-14 22:44:29.476 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" Namespace="calico-system" Pod="goldmane-58fd7646b9-q9tk2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:44:29.520976 env[1320]: 2025-07-14 22:44:29.477 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" Namespace="calico-system" Pod="goldmane-58fd7646b9-q9tk2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"63689819-3628-4d96-bf6f-7f8f144f2164", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a", Pod:"goldmane-58fd7646b9-q9tk2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic558ab75054", MAC:"be:0c:95:53:75:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:29.520976 env[1320]: 2025-07-14 22:44:29.516 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a" Namespace="calico-system" Pod="goldmane-58fd7646b9-q9tk2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:44:29.528000 audit[4709]: NETFILTER_CFG table=filter:114 family=2 entries=60 op=nft_register_chain pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 22:44:29.528000 audit[4709]: SYSCALL arch=c000003e syscall=46 success=yes exit=29932 a0=3 a1=7ffe6d635470 a2=0 a3=7ffe6d63545c items=0 ppid=3781 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:29.528000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:29.586790 env[1320]: time="2025-07-14T22:44:29.586715968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:44:29.586790 env[1320]: time="2025-07-14T22:44:29.586755522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:44:29.586790 env[1320]: time="2025-07-14T22:44:29.586766423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:44:29.587245 env[1320]: time="2025-07-14T22:44:29.587175224Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a pid=4727 runtime=io.containerd.runc.v2 Jul 14 22:44:29.607176 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:44:29.629779 env[1320]: time="2025-07-14T22:44:29.629739685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-q9tk2,Uid:63689819-3628-4d96-bf6f-7f8f144f2164,Namespace:calico-system,Attempt:1,} returns sandbox id \"09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a\"" Jul 14 22:44:29.783473 systemd-networkd[1105]: calic84055f810a: Link UP Jul 14 22:44:29.785576 systemd-networkd[1105]: calic84055f810a: Gained carrier Jul 14 22:44:29.785980 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic84055f810a: link becomes ready Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.539 [INFO][4687] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0 
calico-apiserver-5f66f5ffdc- calico-apiserver e904006e-54c2-458a-afd4-0856ab783ed3 1089 0 2025-07-14 22:43:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f66f5ffdc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f66f5ffdc-799wc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic84055f810a [] [] }} ContainerID="182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-799wc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.539 [INFO][4687] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-799wc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.563 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" HandleID="k8s-pod-network.182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.563 [INFO][4712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" HandleID="k8s-pod-network.182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004922f0), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"localhost", "pod":"calico-apiserver-5f66f5ffdc-799wc", "timestamp":"2025-07-14 22:44:29.563445016 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.563 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.563 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.563 [INFO][4712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.569 [INFO][4712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" host="localhost" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.724 [INFO][4712] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.728 [INFO][4712] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.729 [INFO][4712] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.731 [INFO][4712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.731 [INFO][4712] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" host="localhost" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.732 [INFO][4712] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.746 [INFO][4712] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" host="localhost" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.777 [INFO][4712] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" host="localhost" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.777 [INFO][4712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" host="localhost" Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.777 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:29.893513 env[1320]: 2025-07-14 22:44:29.777 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" HandleID="k8s-pod-network.182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:44:29.894325 env[1320]: 2025-07-14 22:44:29.779 [INFO][4687] cni-plugin/k8s.go 418: Populated endpoint ContainerID="182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-799wc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0", GenerateName:"calico-apiserver-5f66f5ffdc-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"e904006e-54c2-458a-afd4-0856ab783ed3", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f66f5ffdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f66f5ffdc-799wc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic84055f810a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:29.894325 env[1320]: 2025-07-14 22:44:29.780 [INFO][4687] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-799wc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:44:29.894325 env[1320]: 2025-07-14 22:44:29.780 [INFO][4687] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic84055f810a ContainerID="182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-799wc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:44:29.894325 env[1320]: 2025-07-14 
22:44:29.785 [INFO][4687] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-799wc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:44:29.894325 env[1320]: 2025-07-14 22:44:29.786 [INFO][4687] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-799wc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0", GenerateName:"calico-apiserver-5f66f5ffdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e904006e-54c2-458a-afd4-0856ab783ed3", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f66f5ffdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a", Pod:"calico-apiserver-5f66f5ffdc-799wc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic84055f810a", MAC:"76:e4:95:42:6a:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:29.894325 env[1320]: 2025-07-14 22:44:29.891 [INFO][4687] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a" Namespace="calico-apiserver" Pod="calico-apiserver-5f66f5ffdc-799wc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:44:29.904000 audit[4770]: NETFILTER_CFG table=filter:115 family=2 entries=63 op=nft_register_chain pid=4770 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 22:44:29.904000 audit[4770]: SYSCALL arch=c000003e syscall=46 success=yes exit=30680 a0=3 a1=7ffd69dd6840 a2=0 a3=7ffd69dd682c items=0 ppid=3781 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:29.904000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:30.050106 env[1320]: time="2025-07-14T22:44:30.050030650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:44:30.050277 env[1320]: time="2025-07-14T22:44:30.050120730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:44:30.050277 env[1320]: time="2025-07-14T22:44:30.050153451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:44:30.050375 env[1320]: time="2025-07-14T22:44:30.050340264Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a pid=4800 runtime=io.containerd.runc.v2 Jul 14 22:44:30.078472 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:44:30.081029 systemd-networkd[1105]: calid2262a24860: Link UP Jul 14 22:44:30.082518 systemd-networkd[1105]: calid2262a24860: Gained carrier Jul 14 22:44:30.083437 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid2262a24860: link becomes ready Jul 14 22:44:30.103872 env[1320]: time="2025-07-14T22:44:30.103828277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f66f5ffdc-799wc,Uid:e904006e-54c2-458a-afd4-0856ab783ed3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a\"" Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:29.951 [INFO][4771] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0 coredns-7c65d6cfc9- kube-system 44d35327-7f5c-4584-8b0a-dbf8a90adea6 1104 0 2025-07-14 22:43:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-j6xgm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2262a24860 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j6xgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j6xgm-" Jul 14 22:44:30.204702 env[1320]: 
2025-07-14 22:44:29.951 [INFO][4771] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j6xgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:29.975 [INFO][4785] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" HandleID="k8s-pod-network.71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:29.975 [INFO][4785] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" HandleID="k8s-pod-network.71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e580), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-j6xgm", "timestamp":"2025-07-14 22:44:29.975054177 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:29.975 [INFO][4785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:29.975 [INFO][4785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:29.975 [INFO][4785] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:29.981 [INFO][4785] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" host="localhost" Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:29.985 [INFO][4785] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:29.989 [INFO][4785] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:30.038 [INFO][4785] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:30.041 [INFO][4785] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:30.041 [INFO][4785] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" host="localhost" Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:30.042 [INFO][4785] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:30.053 [INFO][4785] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" host="localhost" Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:30.076 [INFO][4785] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" host="localhost" Jul 14 
22:44:30.204702 env[1320]: 2025-07-14 22:44:30.076 [INFO][4785] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" host="localhost" Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:30.076 [INFO][4785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:44:30.204702 env[1320]: 2025-07-14 22:44:30.076 [INFO][4785] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" HandleID="k8s-pod-network.71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:44:30.205370 env[1320]: 2025-07-14 22:44:30.078 [INFO][4771] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j6xgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"44d35327-7f5c-4584-8b0a-dbf8a90adea6", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7c65d6cfc9-j6xgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2262a24860", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:30.205370 env[1320]: 2025-07-14 22:44:30.079 [INFO][4771] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j6xgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:44:30.205370 env[1320]: 2025-07-14 22:44:30.079 [INFO][4771] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2262a24860 ContainerID="71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j6xgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:44:30.205370 env[1320]: 2025-07-14 22:44:30.082 [INFO][4771] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j6xgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:44:30.205370 env[1320]: 2025-07-14 22:44:30.083 [INFO][4771] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j6xgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"44d35327-7f5c-4584-8b0a-dbf8a90adea6", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e", Pod:"coredns-7c65d6cfc9-j6xgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2262a24860", MAC:"36:58:b1:84:01:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:44:30.205370 env[1320]: 2025-07-14 22:44:30.202 [INFO][4771] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j6xgm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:44:30.214000 audit[4842]: NETFILTER_CFG table=filter:116 family=2 entries=52 op=nft_register_chain pid=4842 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 14 22:44:30.214000 audit[4842]: SYSCALL arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7ffe32295af0 a2=0 a3=7ffe32295adc items=0 ppid=3781 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:30.214000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 14 22:44:30.286701 env[1320]: time="2025-07-14T22:44:30.286642631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:44:30.286701 env[1320]: time="2025-07-14T22:44:30.286678718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:44:30.286701 env[1320]: time="2025-07-14T22:44:30.286689951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:44:30.286895 env[1320]: time="2025-07-14T22:44:30.286856785Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e pid=4850 runtime=io.containerd.runc.v2 Jul 14 22:44:30.306367 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:44:30.329519 env[1320]: time="2025-07-14T22:44:30.328754804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j6xgm,Uid:44d35327-7f5c-4584-8b0a-dbf8a90adea6,Namespace:kube-system,Attempt:1,} returns sandbox id \"71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e\"" Jul 14 22:44:30.329669 kubelet[2215]: E0714 22:44:30.329283 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:30.331011 env[1320]: time="2025-07-14T22:44:30.330942601Z" level=info msg="CreateContainer within sandbox \"71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:44:30.578883 env[1320]: time="2025-07-14T22:44:30.578723352Z" level=info msg="CreateContainer within sandbox \"71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b1f8993592db90c0ce0528a0edc8f302736ccdf54586140de6c23fd2f6aae06\"" Jul 14 22:44:30.579787 env[1320]: time="2025-07-14T22:44:30.579728838Z" level=info msg="StartContainer for \"8b1f8993592db90c0ce0528a0edc8f302736ccdf54586140de6c23fd2f6aae06\"" Jul 14 22:44:30.621758 env[1320]: time="2025-07-14T22:44:30.621711428Z" level=info msg="StartContainer for \"8b1f8993592db90c0ce0528a0edc8f302736ccdf54586140de6c23fd2f6aae06\" returns successfully" 
Jul 14 22:44:31.025122 systemd-networkd[1105]: calic558ab75054: Gained IPv6LL Jul 14 22:44:31.248744 kubelet[2215]: E0714 22:44:31.248638 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:31.387545 kubelet[2215]: I0714 22:44:31.387479 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j6xgm" podStartSLOduration=78.387456352 podStartE2EDuration="1m18.387456352s" podCreationTimestamp="2025-07-14 22:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:44:31.373683101 +0000 UTC m=+83.815025275" watchObservedRunningTime="2025-07-14 22:44:31.387456352 +0000 UTC m=+83.828798495" Jul 14 22:44:31.447000 audit[4921]: NETFILTER_CFG table=filter:117 family=2 entries=14 op=nft_register_rule pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:31.447000 audit[4921]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd09573640 a2=0 a3=7ffd0957362c items=0 ppid=2323 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:31.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:31.462000 audit[4921]: NETFILTER_CFG table=nat:118 family=2 entries=56 op=nft_register_chain pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:31.462000 audit[4921]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd09573640 a2=0 a3=7ffd0957362c items=0 ppid=2323 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:31.462000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:31.481000 audit[4924]: NETFILTER_CFG table=filter:119 family=2 entries=14 op=nft_register_rule pid=4924 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:31.481000 audit[4924]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff255a34b0 a2=0 a3=7fff255a349c items=0 ppid=2323 pid=4924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:31.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:31.486000 audit[4924]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=4924 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:31.486000 audit[4924]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff255a34b0 a2=0 a3=7fff255a349c items=0 ppid=2323 pid=4924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:31.486000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:31.601108 systemd-networkd[1105]: calid2262a24860: Gained IPv6LL Jul 14 22:44:31.857536 systemd-networkd[1105]: calic84055f810a: Gained IPv6LL Jul 14 22:44:32.250594 kubelet[2215]: E0714 22:44:32.250488 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:32.489969 env[1320]: time="2025-07-14T22:44:32.489891382Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:32.492845 env[1320]: time="2025-07-14T22:44:32.492798075Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:32.494187 env[1320]: time="2025-07-14T22:44:32.494155075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:32.495523 env[1320]: time="2025-07-14T22:44:32.495496955Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:32.496007 env[1320]: time="2025-07-14T22:44:32.495981239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:44:32.497013 env[1320]: time="2025-07-14T22:44:32.496988419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 22:44:32.497753 env[1320]: time="2025-07-14T22:44:32.497722603Z" level=info msg="CreateContainer within sandbox \"3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:44:32.510649 env[1320]: time="2025-07-14T22:44:32.510554258Z" level=info msg="CreateContainer within sandbox \"3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dc345c24b4cbdbd3c9374b50c84de9b8f6a48c4d45a87dd4018e3ec7032105fa\"" Jul 14 22:44:32.511182 env[1320]: time="2025-07-14T22:44:32.511160612Z" level=info msg="StartContainer for \"dc345c24b4cbdbd3c9374b50c84de9b8f6a48c4d45a87dd4018e3ec7032105fa\"" Jul 14 22:44:32.530818 systemd[1]: run-containerd-runc-k8s.io-dc345c24b4cbdbd3c9374b50c84de9b8f6a48c4d45a87dd4018e3ec7032105fa-runc.8htAdu.mount: Deactivated successfully. Jul 14 22:44:32.572998 env[1320]: time="2025-07-14T22:44:32.568272810Z" level=info msg="StartContainer for \"dc345c24b4cbdbd3c9374b50c84de9b8f6a48c4d45a87dd4018e3ec7032105fa\" returns successfully" Jul 14 22:44:33.254415 kubelet[2215]: E0714 22:44:33.254358 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:33.279000 audit[4972]: NETFILTER_CFG table=filter:121 family=2 entries=14 op=nft_register_rule pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:33.281700 kernel: kauditd_printk_skb: 22 callbacks suppressed Jul 14 22:44:33.281835 kernel: audit: type=1325 audit(1752533073.279:489): table=filter:121 family=2 entries=14 op=nft_register_rule pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:33.279000 audit[4972]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9667b640 a2=0 a3=7ffd9667b62c items=0 ppid=2323 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:33.289061 kernel: audit: type=1300 audit(1752533073.279:489): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9667b640 a2=0 a3=7ffd9667b62c items=0 ppid=2323 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:33.289122 kernel: audit: type=1327 audit(1752533073.279:489): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:33.279000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:33.292000 audit[4972]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:33.292000 audit[4972]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd9667b640 a2=0 a3=7ffd9667b62c items=0 ppid=2323 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:33.301220 kernel: audit: type=1325 audit(1752533073.292:490): table=nat:122 family=2 entries=20 op=nft_register_rule pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:33.301260 kernel: audit: type=1300 audit(1752533073.292:490): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd9667b640 a2=0 a3=7ffd9667b62c items=0 ppid=2323 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:33.301279 kernel: audit: type=1327 audit(1752533073.292:490): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:33.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:33.423000 audit[1]: SERVICE_START 
pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.12:22-10.0.0.1:49706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:33.423981 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:49706.service. Jul 14 22:44:33.428985 kernel: audit: type=1130 audit(1752533073.423:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.12:22-10.0.0.1:49706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:33.463000 audit[4973]: USER_ACCT pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:33.468577 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 49706 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:33.467000 audit[4973]: CRED_ACQ pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:33.468986 sshd[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:33.473062 kernel: audit: type=1101 audit(1752533073.463:492): pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:33.473118 kernel: audit: type=1103 audit(1752533073.467:493): pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:33.473145 kernel: audit: type=1006 audit(1752533073.467:494): pid=4973 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jul 14 22:44:33.472756 systemd-logind[1309]: New session 13 of user core. Jul 14 22:44:33.473208 systemd[1]: Started session-13.scope. Jul 14 22:44:33.467000 audit[4973]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc80404080 a2=3 a3=0 items=0 ppid=1 pid=4973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:33.467000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:33.480000 audit[4973]: USER_START pid=4973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:33.482000 audit[4976]: CRED_ACQ pid=4976 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:33.730683 sshd[4973]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:33.730000 audit[4973]: USER_END pid=4973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:33.730000 audit[4973]: CRED_DISP pid=4973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:33.733468 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:49706.service: Deactivated successfully. Jul 14 22:44:33.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.12:22-10.0.0.1:49706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:33.734662 systemd-logind[1309]: Session 13 logged out. Waiting for processes to exit. Jul 14 22:44:33.734736 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 22:44:33.735817 systemd-logind[1309]: Removed session 13. Jul 14 22:44:34.410053 env[1320]: time="2025-07-14T22:44:34.409995721Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:34.416613 env[1320]: time="2025-07-14T22:44:34.416558940Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:34.419050 env[1320]: time="2025-07-14T22:44:34.419012969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:34.421845 env[1320]: time="2025-07-14T22:44:34.421809654Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:34.422770 env[1320]: time="2025-07-14T22:44:34.422725861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 14 22:44:34.424579 
env[1320]: time="2025-07-14T22:44:34.424537178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 22:44:34.425413 env[1320]: time="2025-07-14T22:44:34.425376059Z" level=info msg="CreateContainer within sandbox \"85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 22:44:34.432632 kubelet[2215]: I0714 22:44:34.432553 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-7hdt4" podStartSLOduration=55.980295737 podStartE2EDuration="1m4.432530213s" podCreationTimestamp="2025-07-14 22:43:30 +0000 UTC" firstStartedPulling="2025-07-14 22:44:24.044527876 +0000 UTC m=+76.485870019" lastFinishedPulling="2025-07-14 22:44:32.496762352 +0000 UTC m=+84.938104495" observedRunningTime="2025-07-14 22:44:33.267711945 +0000 UTC m=+85.709054108" watchObservedRunningTime="2025-07-14 22:44:34.432530213 +0000 UTC m=+86.873872386" Jul 14 22:44:34.448127 env[1320]: time="2025-07-14T22:44:34.448080132Z" level=info msg="CreateContainer within sandbox \"85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"583e5c0ca80def82fe4aa67f5d93fa959df0d627939de850b396de2602f290cb\"" Jul 14 22:44:34.448717 env[1320]: time="2025-07-14T22:44:34.448693659Z" level=info msg="StartContainer for \"583e5c0ca80def82fe4aa67f5d93fa959df0d627939de850b396de2602f290cb\"" Jul 14 22:44:34.466000 audit[5002]: NETFILTER_CFG table=filter:123 family=2 entries=13 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:34.466000 audit[5002]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff5d7190c0 a2=0 a3=7fff5d7190ac items=0 ppid=2323 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:34.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:34.477000 audit[5002]: NETFILTER_CFG table=nat:124 family=2 entries=27 op=nft_register_chain pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:34.477000 audit[5002]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fff5d7190c0 a2=0 a3=7fff5d7190ac items=0 ppid=2323 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:34.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:34.483058 systemd[1]: run-containerd-runc-k8s.io-583e5c0ca80def82fe4aa67f5d93fa959df0d627939de850b396de2602f290cb-runc.pWJdCg.mount: Deactivated successfully. 
Jul 14 22:44:34.530699 env[1320]: time="2025-07-14T22:44:34.530635768Z" level=info msg="StartContainer for \"583e5c0ca80def82fe4aa67f5d93fa959df0d627939de850b396de2602f290cb\" returns successfully" Jul 14 22:44:38.383105 env[1320]: time="2025-07-14T22:44:38.383043604Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:38.426187 env[1320]: time="2025-07-14T22:44:38.426114715Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:38.451632 env[1320]: time="2025-07-14T22:44:38.451585626Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:38.466889 env[1320]: time="2025-07-14T22:44:38.466780900Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:38.467605 env[1320]: time="2025-07-14T22:44:38.467559028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 14 22:44:38.469144 env[1320]: time="2025-07-14T22:44:38.469094474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 22:44:38.481289 env[1320]: time="2025-07-14T22:44:38.481250917Z" level=info msg="CreateContainer within sandbox \"0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 22:44:38.544363 
env[1320]: time="2025-07-14T22:44:38.544303398Z" level=info msg="CreateContainer within sandbox \"0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7242b839ef782047bb1796cf6a5ff74347a0a7eb8e56b415c2498daaa344bf13\"" Jul 14 22:44:38.544858 env[1320]: time="2025-07-14T22:44:38.544831123Z" level=info msg="StartContainer for \"7242b839ef782047bb1796cf6a5ff74347a0a7eb8e56b415c2498daaa344bf13\"" Jul 14 22:44:38.609072 env[1320]: time="2025-07-14T22:44:38.609026889Z" level=info msg="StartContainer for \"7242b839ef782047bb1796cf6a5ff74347a0a7eb8e56b415c2498daaa344bf13\" returns successfully" Jul 14 22:44:38.649051 kubelet[2215]: E0714 22:44:38.648927 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:38.733626 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:45036.service. Jul 14 22:44:38.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.12:22-10.0.0.1:45036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:38.743004 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 14 22:44:38.743167 kernel: audit: type=1130 audit(1752533078.732:502): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.12:22-10.0.0.1:45036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:44:38.781000 audit[5073]: USER_ACCT pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:38.782371 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 45036 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:38.783705 sshd[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:38.782000 audit[5073]: CRED_ACQ pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:38.788439 systemd-logind[1309]: New session 14 of user core. Jul 14 22:44:38.789161 systemd[1]: Started session-14.scope. Jul 14 22:44:38.789341 kernel: audit: type=1101 audit(1752533078.781:503): pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:38.789389 kernel: audit: type=1103 audit(1752533078.782:504): pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:38.789414 kernel: audit: type=1006 audit(1752533078.782:505): pid=5073 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 14 22:44:38.791574 kernel: audit: type=1300 audit(1752533078.782:505): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc858f81a0 a2=3 a3=0 items=0 ppid=1 pid=5073 auid=500 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:38.782000 audit[5073]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc858f81a0 a2=3 a3=0 items=0 ppid=1 pid=5073 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:38.782000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:38.796761 kernel: audit: type=1327 audit(1752533078.782:505): proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:38.796826 kernel: audit: type=1105 audit(1752533078.793:506): pid=5073 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:38.793000 audit[5073]: USER_START pid=5073 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:38.800691 kernel: audit: type=1103 audit(1752533078.794:507): pid=5076 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:38.794000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:39.286974 kubelet[2215]: I0714 22:44:39.286903 2215 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="calico-system/calico-kube-controllers-87ddffd96-qc6h6" podStartSLOduration=53.257762976 podStartE2EDuration="1m6.286886614s" podCreationTimestamp="2025-07-14 22:43:33 +0000 UTC" firstStartedPulling="2025-07-14 22:44:25.439660089 +0000 UTC m=+77.881002232" lastFinishedPulling="2025-07-14 22:44:38.468783727 +0000 UTC m=+90.910125870" observedRunningTime="2025-07-14 22:44:39.286554297 +0000 UTC m=+91.727896440" watchObservedRunningTime="2025-07-14 22:44:39.286886614 +0000 UTC m=+91.728228757" Jul 14 22:44:39.291037 sshd[5073]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:39.293567 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:45052.service. Jul 14 22:44:39.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.12:22-10.0.0.1:45052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:39.296000 audit[5073]: USER_END pid=5073 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:39.299790 systemd-logind[1309]: Session 14 logged out. Waiting for processes to exit. Jul 14 22:44:39.300198 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:45036.service: Deactivated successfully. Jul 14 22:44:39.300952 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 22:44:39.302995 kernel: audit: type=1130 audit(1752533079.292:508): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.12:22-10.0.0.1:45052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:44:39.303050 kernel: audit: type=1106 audit(1752533079.296:509): pid=5073 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:39.296000 audit[5073]: CRED_DISP pid=5073 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:39.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.12:22-10.0.0.1:45036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:39.304114 systemd-logind[1309]: Removed session 14. Jul 14 22:44:39.333000 audit[5098]: USER_ACCT pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:39.334852 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 45052 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:39.334000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:39.334000 audit[5098]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef76fb320 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 
22:44:39.334000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:39.336161 sshd[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:39.340335 systemd-logind[1309]: New session 15 of user core. Jul 14 22:44:39.341070 systemd[1]: Started session-15.scope. Jul 14 22:44:39.346000 audit[5098]: USER_START pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:39.347000 audit[5112]: CRED_ACQ pid=5112 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:40.168875 sshd[5098]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:40.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.12:22-10.0.0.1:45054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:40.170829 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:45054.service. 
Jul 14 22:44:40.170000 audit[5098]: USER_END pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:40.170000 audit[5098]: CRED_DISP pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:40.173226 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:45052.service: Deactivated successfully. Jul 14 22:44:40.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.12:22-10.0.0.1:45052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:40.174263 systemd-logind[1309]: Session 15 logged out. Waiting for processes to exit. Jul 14 22:44:40.174310 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 22:44:40.175119 systemd-logind[1309]: Removed session 15. 
Jul 14 22:44:40.215000 audit[5140]: USER_ACCT pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:40.216777 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 45054 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:40.216000 audit[5140]: CRED_ACQ pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:40.216000 audit[5140]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde07c30f0 a2=3 a3=0 items=0 ppid=1 pid=5140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:40.216000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:40.217994 sshd[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:40.221515 systemd-logind[1309]: New session 16 of user core. Jul 14 22:44:40.222429 systemd[1]: Started session-16.scope. 
Jul 14 22:44:40.225000 audit[5140]: USER_START pid=5140 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:40.226000 audit[5145]: CRED_ACQ pid=5145 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:40.342072 sshd[5140]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:40.342000 audit[5140]: USER_END pid=5140 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:40.342000 audit[5140]: CRED_DISP pid=5140 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:40.344076 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:45054.service: Deactivated successfully. Jul 14 22:44:40.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.12:22-10.0.0.1:45054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:40.345020 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 22:44:40.345948 systemd-logind[1309]: Session 16 logged out. Waiting for processes to exit. Jul 14 22:44:40.346635 systemd-logind[1309]: Removed session 16. 
Jul 14 22:44:41.511249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644802824.mount: Deactivated successfully. Jul 14 22:44:41.566928 env[1320]: time="2025-07-14T22:44:41.566867220Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:41.572003 env[1320]: time="2025-07-14T22:44:41.571942880Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:41.575023 env[1320]: time="2025-07-14T22:44:41.574931466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:41.577227 env[1320]: time="2025-07-14T22:44:41.577168643Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:41.578084 env[1320]: time="2025-07-14T22:44:41.578034506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 14 22:44:41.579308 env[1320]: time="2025-07-14T22:44:41.579262351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 22:44:41.580659 env[1320]: time="2025-07-14T22:44:41.580622956Z" level=info msg="CreateContainer within sandbox \"73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 14 22:44:41.597149 env[1320]: time="2025-07-14T22:44:41.597075909Z" level=info msg="CreateContainer within sandbox 
\"73fd9d648e9b60a36b10d2127e2d1598621237cd4b6d487ae1aac12f71bd6b6f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6dbe9cb53df3d996074e1c151f1209e62e7a14f21884b98ec11ba879c6fa97d8\"" Jul 14 22:44:41.597857 env[1320]: time="2025-07-14T22:44:41.597780086Z" level=info msg="StartContainer for \"6dbe9cb53df3d996074e1c151f1209e62e7a14f21884b98ec11ba879c6fa97d8\"" Jul 14 22:44:41.666352 env[1320]: time="2025-07-14T22:44:41.666278130Z" level=info msg="StartContainer for \"6dbe9cb53df3d996074e1c151f1209e62e7a14f21884b98ec11ba879c6fa97d8\" returns successfully" Jul 14 22:44:42.292402 kubelet[2215]: I0714 22:44:42.292300 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5f6b6647b6-8j8t7" podStartSLOduration=2.6914329329999997 podStartE2EDuration="22.292281033s" podCreationTimestamp="2025-07-14 22:44:20 +0000 UTC" firstStartedPulling="2025-07-14 22:44:21.978235995 +0000 UTC m=+74.419578138" lastFinishedPulling="2025-07-14 22:44:41.579084085 +0000 UTC m=+94.020426238" observedRunningTime="2025-07-14 22:44:42.291859008 +0000 UTC m=+94.733201181" watchObservedRunningTime="2025-07-14 22:44:42.292281033 +0000 UTC m=+94.733623176" Jul 14 22:44:42.305000 audit[5203]: NETFILTER_CFG table=filter:125 family=2 entries=11 op=nft_register_rule pid=5203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:42.305000 audit[5203]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff37c6ec90 a2=0 a3=7fff37c6ec7c items=0 ppid=2323 pid=5203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:42.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:42.311000 audit[5203]: NETFILTER_CFG table=nat:126 family=2 entries=29 
op=nft_register_chain pid=5203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:42.311000 audit[5203]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fff37c6ec90 a2=0 a3=7fff37c6ec7c items=0 ppid=2323 pid=5203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:42.311000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:44.776553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2875262134.mount: Deactivated successfully. Jul 14 22:44:45.344995 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:45068.service. Jul 14 22:44:45.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.12:22-10.0.0.1:45068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:45.350806 kernel: kauditd_printk_skb: 29 callbacks suppressed Jul 14 22:44:45.350876 kernel: audit: type=1130 audit(1752533085.344:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.12:22-10.0.0.1:45068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:44:45.384000 audit[5206]: USER_ACCT pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.385222 sshd[5206]: Accepted publickey for core from 10.0.0.1 port 45068 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:45.388568 sshd[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:45.387000 audit[5206]: CRED_ACQ pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.392241 systemd-logind[1309]: New session 17 of user core. Jul 14 22:44:45.393192 systemd[1]: Started session-17.scope. Jul 14 22:44:45.393319 kernel: audit: type=1101 audit(1752533085.384:532): pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.393358 kernel: audit: type=1103 audit(1752533085.387:533): pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.393383 kernel: audit: type=1006 audit(1752533085.387:534): pid=5206 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jul 14 22:44:45.387000 audit[5206]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff98c2d030 a2=3 a3=0 items=0 ppid=1 pid=5206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:45.401092 kernel: audit: type=1300 audit(1752533085.387:534): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff98c2d030 a2=3 a3=0 items=0 ppid=1 pid=5206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:45.401154 kernel: audit: type=1327 audit(1752533085.387:534): proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:45.387000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:45.398000 audit[5206]: USER_START pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.406943 kernel: audit: type=1105 audit(1752533085.398:535): pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.407006 kernel: audit: type=1103 audit(1752533085.399:536): pid=5209 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.399000 audit[5209]: CRED_ACQ pid=5209 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.556787 sshd[5206]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:45.556000 
audit[5206]: USER_END pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.558843 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:45068.service: Deactivated successfully. Jul 14 22:44:45.559573 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 22:44:45.560315 systemd-logind[1309]: Session 17 logged out. Waiting for processes to exit. Jul 14 22:44:45.561235 systemd-logind[1309]: Removed session 17. Jul 14 22:44:45.556000 audit[5206]: CRED_DISP pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.565875 kernel: audit: type=1106 audit(1752533085.556:537): pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.570681 kernel: audit: type=1104 audit(1752533085.556:538): pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:45.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.12:22-10.0.0.1:45068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:44:46.566052 env[1320]: time="2025-07-14T22:44:46.566000096Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:46.567711 env[1320]: time="2025-07-14T22:44:46.567689180Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:46.569646 env[1320]: time="2025-07-14T22:44:46.569613608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:46.571306 env[1320]: time="2025-07-14T22:44:46.571258940Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:46.571841 env[1320]: time="2025-07-14T22:44:46.571807444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 14 22:44:46.573121 env[1320]: time="2025-07-14T22:44:46.573038535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:44:46.574056 env[1320]: time="2025-07-14T22:44:46.574023181Z" level=info msg="CreateContainer within sandbox \"09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 14 22:44:46.586709 env[1320]: time="2025-07-14T22:44:46.586657007Z" level=info msg="CreateContainer within sandbox \"09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id 
\"4ad833d0cc721cbc4e1616bb2502aeadf33726dd82c18bc322b774ab6ef7566a\"" Jul 14 22:44:46.587283 env[1320]: time="2025-07-14T22:44:46.587249175Z" level=info msg="StartContainer for \"4ad833d0cc721cbc4e1616bb2502aeadf33726dd82c18bc322b774ab6ef7566a\"" Jul 14 22:44:46.649036 env[1320]: time="2025-07-14T22:44:46.648785000Z" level=info msg="StartContainer for \"4ad833d0cc721cbc4e1616bb2502aeadf33726dd82c18bc322b774ab6ef7566a\" returns successfully" Jul 14 22:44:47.048991 env[1320]: time="2025-07-14T22:44:47.048845790Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:47.050919 env[1320]: time="2025-07-14T22:44:47.050893771Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:47.052706 env[1320]: time="2025-07-14T22:44:47.052676592Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:47.054247 env[1320]: time="2025-07-14T22:44:47.054225661Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:47.054622 env[1320]: time="2025-07-14T22:44:47.054599306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:44:47.055736 env[1320]: time="2025-07-14T22:44:47.055532525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 22:44:47.056514 env[1320]: 
time="2025-07-14T22:44:47.056489630Z" level=info msg="CreateContainer within sandbox \"182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:44:47.070578 env[1320]: time="2025-07-14T22:44:47.070527292Z" level=info msg="CreateContainer within sandbox \"182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c37f581ada0c3c722c1fe7b0dbb451acf23cff1fce741ac20d7401a5dfc795c2\"" Jul 14 22:44:47.071004 env[1320]: time="2025-07-14T22:44:47.070979014Z" level=info msg="StartContainer for \"c37f581ada0c3c722c1fe7b0dbb451acf23cff1fce741ac20d7401a5dfc795c2\"" Jul 14 22:44:47.120143 env[1320]: time="2025-07-14T22:44:47.120097988Z" level=info msg="StartContainer for \"c37f581ada0c3c722c1fe7b0dbb451acf23cff1fce741ac20d7401a5dfc795c2\" returns successfully" Jul 14 22:44:47.305275 kubelet[2215]: I0714 22:44:47.305082 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-q9tk2" podStartSLOduration=57.363144903 podStartE2EDuration="1m14.305061445s" podCreationTimestamp="2025-07-14 22:43:33 +0000 UTC" firstStartedPulling="2025-07-14 22:44:29.630917637 +0000 UTC m=+82.072259780" lastFinishedPulling="2025-07-14 22:44:46.572834179 +0000 UTC m=+99.014176322" observedRunningTime="2025-07-14 22:44:47.301897471 +0000 UTC m=+99.743239614" watchObservedRunningTime="2025-07-14 22:44:47.305061445 +0000 UTC m=+99.746403588" Jul 14 22:44:47.320636 kubelet[2215]: I0714 22:44:47.319019 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f66f5ffdc-799wc" podStartSLOduration=60.368655387 podStartE2EDuration="1m17.318998007s" podCreationTimestamp="2025-07-14 22:43:30 +0000 UTC" firstStartedPulling="2025-07-14 22:44:30.105082814 +0000 UTC m=+82.546424957" lastFinishedPulling="2025-07-14 22:44:47.055425434 
+0000 UTC m=+99.496767577" observedRunningTime="2025-07-14 22:44:47.31878736 +0000 UTC m=+99.760129493" watchObservedRunningTime="2025-07-14 22:44:47.318998007 +0000 UTC m=+99.760340150" Jul 14 22:44:47.342000 audit[5310]: NETFILTER_CFG table=filter:127 family=2 entries=10 op=nft_register_rule pid=5310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:47.342000 audit[5310]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe97e20ae0 a2=0 a3=7ffe97e20acc items=0 ppid=2323 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:47.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:47.355000 audit[5310]: NETFILTER_CFG table=nat:128 family=2 entries=24 op=nft_register_rule pid=5310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:47.355000 audit[5310]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffe97e20ae0 a2=0 a3=7ffe97e20acc items=0 ppid=2323 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:47.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:47.375000 audit[5316]: NETFILTER_CFG table=filter:129 family=2 entries=10 op=nft_register_rule pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:47.375000 audit[5316]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffdb577de40 a2=0 a3=7ffdb577de2c items=0 ppid=2323 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:47.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:47.379000 audit[5316]: NETFILTER_CFG table=nat:130 family=2 entries=32 op=nft_register_rule pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:47.379000 audit[5316]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffdb577de40 a2=0 a3=7ffdb577de2c items=0 ppid=2323 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:47.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:49.298576 kubelet[2215]: I0714 22:44:49.298522 2215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:44:49.316813 systemd[1]: run-containerd-runc-k8s.io-4ad833d0cc721cbc4e1616bb2502aeadf33726dd82c18bc322b774ab6ef7566a-runc.g83tL0.mount: Deactivated successfully. 
Jul 14 22:44:49.587000 audit[5368]: NETFILTER_CFG table=filter:131 family=2 entries=10 op=nft_register_rule pid=5368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:49.587000 audit[5368]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffdc35f3520 a2=0 a3=7ffdc35f350c items=0 ppid=2323 pid=5368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:49.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:49.593000 audit[5368]: NETFILTER_CFG table=nat:132 family=2 entries=36 op=nft_register_chain pid=5368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:44:49.593000 audit[5368]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffdc35f3520 a2=0 a3=7ffdc35f350c items=0 ppid=2323 pid=5368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:49.593000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:44:49.790544 env[1320]: time="2025-07-14T22:44:49.790475656Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:49.792114 env[1320]: time="2025-07-14T22:44:49.792084780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:49.793607 env[1320]: time="2025-07-14T22:44:49.793566833Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:49.794800 env[1320]: time="2025-07-14T22:44:49.794762117Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:44:49.795171 env[1320]: time="2025-07-14T22:44:49.795144126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 14 22:44:49.796823 env[1320]: time="2025-07-14T22:44:49.796796031Z" level=info msg="CreateContainer within sandbox \"85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 14 22:44:49.807656 env[1320]: time="2025-07-14T22:44:49.807620565Z" level=info msg="CreateContainer within sandbox \"85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9bf597bc301fc894cfcb5c865ce13b3845fcb8f124fc406ae02a45e9c403791f\"" Jul 14 22:44:49.807953 env[1320]: time="2025-07-14T22:44:49.807933104Z" level=info msg="StartContainer for \"9bf597bc301fc894cfcb5c865ce13b3845fcb8f124fc406ae02a45e9c403791f\"" Jul 14 22:44:49.847096 env[1320]: time="2025-07-14T22:44:49.846621661Z" level=info msg="StartContainer for \"9bf597bc301fc894cfcb5c865ce13b3845fcb8f124fc406ae02a45e9c403791f\" returns successfully" Jul 14 22:44:50.053805 kubelet[2215]: I0714 22:44:50.053765 2215 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 14 
22:44:50.055332 kubelet[2215]: I0714 22:44:50.055287 2215 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 14 22:44:50.317019 kubelet[2215]: I0714 22:44:50.316952 2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lx29x" podStartSLOduration=52.740018395999996 podStartE2EDuration="1m17.316932167s" podCreationTimestamp="2025-07-14 22:43:33 +0000 UTC" firstStartedPulling="2025-07-14 22:44:25.218861305 +0000 UTC m=+77.660203448" lastFinishedPulling="2025-07-14 22:44:49.795775076 +0000 UTC m=+102.237117219" observedRunningTime="2025-07-14 22:44:50.313035723 +0000 UTC m=+102.754377866" watchObservedRunningTime="2025-07-14 22:44:50.316932167 +0000 UTC m=+102.758274300" Jul 14 22:44:50.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.12:22-10.0.0.1:39874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:50.559753 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:39874.service. Jul 14 22:44:50.560945 kernel: kauditd_printk_skb: 19 callbacks suppressed Jul 14 22:44:50.561028 kernel: audit: type=1130 audit(1752533090.558:546): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.12:22-10.0.0.1:39874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:44:50.601000 audit[5408]: USER_ACCT pid=5408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:50.602370 sshd[5408]: Accepted publickey for core from 10.0.0.1 port 39874 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:50.609926 kernel: audit: type=1101 audit(1752533090.601:547): pid=5408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:50.610001 kernel: audit: type=1103 audit(1752533090.605:548): pid=5408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:50.605000 audit[5408]: CRED_ACQ pid=5408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:50.606818 sshd[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:50.605000 audit[5408]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6a7c62a0 a2=3 a3=0 items=0 ppid=1 pid=5408 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:50.613290 systemd-logind[1309]: New session 18 of user core. Jul 14 22:44:50.614445 systemd[1]: Started session-18.scope. 
Jul 14 22:44:50.617201 kernel: audit: type=1006 audit(1752533090.605:549): pid=5408 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jul 14 22:44:50.617257 kernel: audit: type=1300 audit(1752533090.605:549): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6a7c62a0 a2=3 a3=0 items=0 ppid=1 pid=5408 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:50.618714 kernel: audit: type=1327 audit(1752533090.605:549): proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:50.605000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:50.619000 audit[5408]: USER_START pid=5408 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:50.620000 audit[5411]: CRED_ACQ pid=5411 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:50.628421 kernel: audit: type=1105 audit(1752533090.619:550): pid=5408 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:50.628463 kernel: audit: type=1103 audit(1752533090.620:551): pid=5411 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 
22:44:50.649303 kubelet[2215]: E0714 22:44:50.649268 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:50.942780 sshd[5408]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:50.942000 audit[5408]: USER_END pid=5408 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:50.945176 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:39874.service: Deactivated successfully. Jul 14 22:44:50.945903 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 22:44:50.946860 systemd-logind[1309]: Session 18 logged out. Waiting for processes to exit. Jul 14 22:44:50.947892 systemd-logind[1309]: Removed session 18. Jul 14 22:44:50.942000 audit[5408]: CRED_DISP pid=5408 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:50.952646 kernel: audit: type=1106 audit(1752533090.942:552): pid=5408 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:50.952710 kernel: audit: type=1104 audit(1752533090.942:553): pid=5408 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:50.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.12:22-10.0.0.1:39874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:55.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.12:22-10.0.0.1:39884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:55.945976 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:39884.service. Jul 14 22:44:55.951288 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 22:44:55.951698 kernel: audit: type=1130 audit(1752533095.945:555): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.12:22-10.0.0.1:39884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:55.988000 audit[5464]: USER_ACCT pid=5464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:55.989219 sshd[5464]: Accepted publickey for core from 10.0.0.1 port 39884 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:44:55.990513 sshd[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:55.989000 audit[5464]: CRED_ACQ pid=5464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:55.996321 kernel: audit: type=1101 audit(1752533095.988:556): pid=5464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:55.996417 kernel: audit: type=1103 audit(1752533095.989:557): pid=5464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:55.996439 kernel: audit: type=1006 audit(1752533095.989:558): pid=5464 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jul 14 22:44:55.996770 systemd-logind[1309]: New session 19 of user core. Jul 14 22:44:55.997481 systemd[1]: Started session-19.scope. Jul 14 22:44:56.002529 kernel: audit: type=1300 audit(1752533095.989:558): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd21538170 a2=3 a3=0 items=0 ppid=1 pid=5464 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:55.989000 audit[5464]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd21538170 a2=3 a3=0 items=0 ppid=1 pid=5464 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:44:55.989000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:56.004200 kernel: audit: type=1327 audit(1752533095.989:558): proctitle=737368643A20636F7265205B707269765D Jul 14 22:44:56.004257 kernel: audit: type=1105 audit(1752533096.001:559): pid=5464 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:56.001000 audit[5464]: USER_START pid=5464 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:56.002000 audit[5467]: CRED_ACQ pid=5467 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:56.011496 kernel: audit: type=1103 audit(1752533096.002:560): pid=5467 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:56.164835 sshd[5464]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:56.164000 audit[5464]: USER_END pid=5464 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:56.167138 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:39884.service: Deactivated successfully. Jul 14 22:44:56.168278 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 22:44:56.168598 systemd-logind[1309]: Session 19 logged out. Waiting for processes to exit. Jul 14 22:44:56.169638 systemd-logind[1309]: Removed session 19. 
Jul 14 22:44:56.164000 audit[5464]: CRED_DISP pid=5464 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:56.173202 kernel: audit: type=1106 audit(1752533096.164:561): pid=5464 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:56.173472 kernel: audit: type=1104 audit(1752533096.164:562): pid=5464 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:44:56.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.12:22-10.0.0.1:39884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:44:56.649192 kubelet[2215]: E0714 22:44:56.649140 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:45:01.167424 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:41810.service. Jul 14 22:45:01.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.12:22-10.0.0.1:41810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:45:01.169442 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 22:45:01.169502 kernel: audit: type=1130 audit(1752533101.166:564): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.12:22-10.0.0.1:41810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:01.210000 audit[5500]: USER_ACCT pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:01.211644 sshd[5500]: Accepted publickey for core from 10.0.0.1 port 41810 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:45:01.212796 sshd[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:45:01.210000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:01.216173 systemd-logind[1309]: New session 20 of user core. Jul 14 22:45:01.216873 systemd[1]: Started session-20.scope. 
Jul 14 22:45:01.218811 kernel: audit: type=1101 audit(1752533101.210:565): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:01.218856 kernel: audit: type=1103 audit(1752533101.210:566): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:01.218879 kernel: audit: type=1006 audit(1752533101.210:567): pid=5500 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jul 14 22:45:01.210000 audit[5500]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef16760c0 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:01.224942 kernel: audit: type=1300 audit(1752533101.210:567): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef16760c0 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:01.225019 kernel: audit: type=1327 audit(1752533101.210:567): proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:01.210000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:01.218000 audit[5500]: USER_START pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jul 14 22:45:01.230355 kernel: audit: type=1105 audit(1752533101.218:568): pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:01.230393 kernel: audit: type=1103 audit(1752533101.220:569): pid=5503 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:01.220000 audit[5503]: CRED_ACQ pid=5503 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:01.356569 sshd[5500]: pam_unix(sshd:session): session closed for user core Jul 14 22:45:01.357000 audit[5500]: USER_END pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:01.360299 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:41810.service: Deactivated successfully. Jul 14 22:45:01.361055 systemd[1]: session-20.scope: Deactivated successfully. 
Jul 14 22:45:01.362996 kernel: audit: type=1106 audit(1752533101.357:570): pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:01.357000 audit[5500]: CRED_DISP pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:01.363469 systemd-logind[1309]: Session 20 logged out. Waiting for processes to exit. Jul 14 22:45:01.364512 systemd-logind[1309]: Removed session 20. Jul 14 22:45:01.366982 kernel: audit: type=1104 audit(1752533101.357:571): pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:01.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.12:22-10.0.0.1:41810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:06.359157 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:41822.service. Jul 14 22:45:06.363996 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 22:45:06.364148 kernel: audit: type=1130 audit(1752533106.358:573): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.12:22-10.0.0.1:41822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:45:06.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.12:22-10.0.0.1:41822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:06.425000 audit[5520]: USER_ACCT pid=5520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.426000 audit[5520]: CRED_ACQ pid=5520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.430208 sshd[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:45:06.434578 kernel: audit: type=1101 audit(1752533106.425:574): pid=5520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.434613 kernel: audit: type=1103 audit(1752533106.426:575): pid=5520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.434631 sshd[5520]: Accepted publickey for core from 10.0.0.1 port 41822 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:45:06.436993 kernel: audit: type=1006 audit(1752533106.426:576): pid=5520 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jul 14 22:45:06.426000 audit[5520]: SYSCALL arch=c000003e syscall=1 
success=yes exit=3 a0=5 a1=7ffea43376c0 a2=3 a3=0 items=0 ppid=1 pid=5520 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:06.438185 systemd[1]: Started session-21.scope. Jul 14 22:45:06.438501 systemd-logind[1309]: New session 21 of user core. Jul 14 22:45:06.441096 kernel: audit: type=1300 audit(1752533106.426:576): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea43376c0 a2=3 a3=0 items=0 ppid=1 pid=5520 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:06.441152 kernel: audit: type=1327 audit(1752533106.426:576): proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:06.426000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:06.443000 audit[5520]: USER_START pid=5520 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.444000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.451856 kernel: audit: type=1105 audit(1752533106.443:577): pid=5520 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.451900 kernel: audit: type=1103 audit(1752533106.444:578): pid=5523 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.654000 audit[5520]: USER_END pid=5520 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.656718 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:41834.service. Jul 14 22:45:06.654215 sshd[5520]: pam_unix(sshd:session): session closed for user core Jul 14 22:45:06.658070 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:41822.service: Deactivated successfully. Jul 14 22:45:06.658712 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 22:45:06.654000 audit[5520]: CRED_DISP pid=5520 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.662757 kernel: audit: type=1106 audit(1752533106.654:579): pid=5520 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.662981 kernel: audit: type=1104 audit(1752533106.654:580): pid=5520 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.12:22-10.0.0.1:41834 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:06.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.12:22-10.0.0.1:41822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:06.663334 systemd-logind[1309]: Session 21 logged out. Waiting for processes to exit. Jul 14 22:45:06.664329 systemd-logind[1309]: Removed session 21. Jul 14 22:45:06.695000 audit[5533]: USER_ACCT pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.696705 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 41834 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:45:06.696000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.696000 audit[5533]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc4cb5520 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:06.696000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:06.697747 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:45:06.701234 systemd-logind[1309]: New session 22 of user core. Jul 14 22:45:06.701927 systemd[1]: Started session-22.scope. 
Jul 14 22:45:06.705000 audit[5533]: USER_START pid=5533 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:06.706000 audit[5537]: CRED_ACQ pid=5537 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:07.652612 sshd[5533]: pam_unix(sshd:session): session closed for user core Jul 14 22:45:07.652000 audit[5533]: USER_END pid=5533 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:07.652000 audit[5533]: CRED_DISP pid=5533 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:07.655347 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:41836.service. Jul 14 22:45:07.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.12:22-10.0.0.1:41836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:07.658043 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:41834.service: Deactivated successfully. Jul 14 22:45:07.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.12:22-10.0.0.1:41834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:45:07.659219 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 22:45:07.660394 systemd-logind[1309]: Session 22 logged out. Waiting for processes to exit. Jul 14 22:45:07.661184 systemd-logind[1309]: Removed session 22. Jul 14 22:45:07.696000 audit[5544]: USER_ACCT pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:07.698029 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 41836 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:45:07.697000 audit[5544]: CRED_ACQ pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:07.697000 audit[5544]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff91e41570 a2=3 a3=0 items=0 ppid=1 pid=5544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:07.697000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:07.699119 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:45:07.702593 systemd-logind[1309]: New session 23 of user core. Jul 14 22:45:07.703544 systemd[1]: Started session-23.scope. 
Jul 14 22:45:07.707000 audit[5544]: USER_START pid=5544 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:07.708000 audit[5551]: CRED_ACQ pid=5551 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:09.547000 audit[5564]: NETFILTER_CFG table=filter:133 family=2 entries=22 op=nft_register_rule pid=5564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:45:09.547000 audit[5564]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffd8c30da60 a2=0 a3=7ffd8c30da4c items=0 ppid=2323 pid=5564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:09.547000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:45:09.552000 audit[5564]: NETFILTER_CFG table=nat:134 family=2 entries=24 op=nft_register_rule pid=5564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:45:09.552000 audit[5564]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffd8c30da60 a2=0 a3=0 items=0 ppid=2323 pid=5564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:09.552000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:45:09.565000 audit[1]: 
SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.12:22-10.0.0.1:59070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:09.567678 sshd[5544]: pam_unix(sshd:session): session closed for user core Jul 14 22:45:09.568000 audit[5544]: USER_END pid=5544 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:09.568000 audit[5544]: CRED_DISP pid=5544 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:09.568000 audit[5567]: NETFILTER_CFG table=filter:135 family=2 entries=34 op=nft_register_rule pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:45:09.568000 audit[5567]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffe7fa87cc0 a2=0 a3=7ffe7fa87cac items=0 ppid=2323 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:09.568000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:45:09.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.12:22-10.0.0.1:41836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:09.566801 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:59070.service. 
Jul 14 22:45:09.570612 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:41836.service: Deactivated successfully. Jul 14 22:45:09.572090 systemd[1]: session-23.scope: Deactivated successfully. Jul 14 22:45:09.572578 systemd-logind[1309]: Session 23 logged out. Waiting for processes to exit. Jul 14 22:45:09.574446 systemd-logind[1309]: Removed session 23. Jul 14 22:45:09.573000 audit[5567]: NETFILTER_CFG table=nat:136 family=2 entries=24 op=nft_register_rule pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:45:09.573000 audit[5567]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffe7fa87cc0 a2=0 a3=0 items=0 ppid=2323 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:09.573000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:45:09.607464 sshd[5566]: Accepted publickey for core from 10.0.0.1 port 59070 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:45:09.608613 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:45:09.606000 audit[5566]: USER_ACCT pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:09.607000 audit[5566]: CRED_ACQ pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:09.607000 audit[5566]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeece239e0 a2=3 a3=0 items=0 ppid=1 pid=5566 
auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:09.607000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:09.612776 systemd[1]: Started session-24.scope. Jul 14 22:45:09.613081 systemd-logind[1309]: New session 24 of user core. Jul 14 22:45:09.616000 audit[5566]: USER_START pid=5566 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:09.617000 audit[5572]: CRED_ACQ pid=5572 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:09.957799 systemd[1]: run-containerd-runc-k8s.io-2e264eaf1013788458bc3ce6fad8414cae4f82efc46a599a1fc04a29ba23be15-runc.Zbu7cC.mount: Deactivated successfully. Jul 14 22:45:10.336031 sshd[5566]: pam_unix(sshd:session): session closed for user core Jul 14 22:45:10.337843 systemd[1]: Started sshd@24-10.0.0.12:22-10.0.0.1:59074.service. 
Jul 14 22:45:10.336000 audit[5566]: USER_END pid=5566 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:10.336000 audit[5566]: CRED_DISP pid=5566 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:10.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.12:22-10.0.0.1:59074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:10.340041 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:59070.service: Deactivated successfully. Jul 14 22:45:10.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.12:22-10.0.0.1:59070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:10.340848 systemd[1]: session-24.scope: Deactivated successfully. Jul 14 22:45:10.341437 systemd-logind[1309]: Session 24 logged out. Waiting for processes to exit. Jul 14 22:45:10.345997 systemd-logind[1309]: Removed session 24. 
Jul 14 22:45:10.384000 audit[5602]: USER_ACCT pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:10.385919 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 59074 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:45:10.385000 audit[5602]: CRED_ACQ pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:10.385000 audit[5602]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7de9cee0 a2=3 a3=0 items=0 ppid=1 pid=5602 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:10.385000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:10.387124 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:45:10.390786 systemd-logind[1309]: New session 25 of user core. Jul 14 22:45:10.391653 systemd[1]: Started session-25.scope. 
Jul 14 22:45:10.396000 audit[5602]: USER_START pid=5602 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:10.397000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:10.529652 sshd[5602]: pam_unix(sshd:session): session closed for user core Jul 14 22:45:10.529000 audit[5602]: USER_END pid=5602 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:10.529000 audit[5602]: CRED_DISP pid=5602 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:10.531901 systemd[1]: sshd@24-10.0.0.12:22-10.0.0.1:59074.service: Deactivated successfully. Jul 14 22:45:10.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.12:22-10.0.0.1:59074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:10.532799 systemd-logind[1309]: Session 25 logged out. Waiting for processes to exit. Jul 14 22:45:10.532869 systemd[1]: session-25.scope: Deactivated successfully. Jul 14 22:45:10.533481 systemd-logind[1309]: Removed session 25. 
Jul 14 22:45:11.394004 env[1320]: time="2025-07-14T22:45:11.393627367Z" level=info msg="StopPodSandbox for \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\"" Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.580 [WARNING][5629] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"573e0651-d8b7-4359-8549-45a022613024", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e", Pod:"coredns-7c65d6cfc9-scf5h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1613c8eb5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.581 [INFO][5629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.581 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" iface="eth0" netns="" Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.581 [INFO][5629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.581 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.644 [INFO][5638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" HandleID="k8s-pod-network.a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.646 [INFO][5638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.646 [INFO][5638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.660 [WARNING][5638] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" HandleID="k8s-pod-network.a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.661 [INFO][5638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" HandleID="k8s-pod-network.a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.662 [INFO][5638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:11.666326 env[1320]: 2025-07-14 22:45:11.664 [INFO][5629] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:45:11.667626 env[1320]: time="2025-07-14T22:45:11.666307055Z" level=info msg="TearDown network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\" successfully" Jul 14 22:45:11.667626 env[1320]: time="2025-07-14T22:45:11.666348535Z" level=info msg="StopPodSandbox for \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\" returns successfully" Jul 14 22:45:11.674629 env[1320]: time="2025-07-14T22:45:11.674584674Z" level=info msg="RemovePodSandbox for \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\"" Jul 14 22:45:11.674715 env[1320]: time="2025-07-14T22:45:11.674639871Z" level=info msg="Forcibly stopping sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\"" Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.718 [WARNING][5655] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"573e0651-d8b7-4359-8549-45a022613024", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1052d16c678f9e8fc859ada33f586eb0ae8320c06857acd55fc093e283084c5e", Pod:"coredns-7c65d6cfc9-scf5h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1613c8eb5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.718 [INFO][5655] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.718 [INFO][5655] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" iface="eth0" netns="" Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.718 [INFO][5655] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.718 [INFO][5655] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.741 [INFO][5664] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" HandleID="k8s-pod-network.a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.741 [INFO][5664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.742 [INFO][5664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.747 [WARNING][5664] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" HandleID="k8s-pod-network.a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.747 [INFO][5664] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" HandleID="k8s-pod-network.a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Workload="localhost-k8s-coredns--7c65d6cfc9--scf5h-eth0" Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.748 [INFO][5664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:11.753539 env[1320]: 2025-07-14 22:45:11.750 [INFO][5655] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897" Jul 14 22:45:11.754337 env[1320]: time="2025-07-14T22:45:11.753756630Z" level=info msg="TearDown network for sandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\" successfully" Jul 14 22:45:11.762581 env[1320]: time="2025-07-14T22:45:11.762535447Z" level=info msg="RemovePodSandbox \"a1c1c6961d6cb40d47065fffdad953972de2707a7531f8bed2c860c983d20897\" returns successfully" Jul 14 22:45:11.769018 env[1320]: time="2025-07-14T22:45:11.768982671Z" level=info msg="StopPodSandbox for \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\"" Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.835 [WARNING][5680] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0", GenerateName:"calico-apiserver-5f66f5ffdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"28f9fef2-ff3a-4233-92f1-c94976e9b138", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f66f5ffdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a", Pod:"calico-apiserver-5f66f5ffdc-7hdt4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ac7f929035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.835 [INFO][5680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.835 [INFO][5680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" iface="eth0" netns="" Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.835 [INFO][5680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.835 [INFO][5680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.855 [INFO][5688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" HandleID="k8s-pod-network.e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.855 [INFO][5688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.855 [INFO][5688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.860 [WARNING][5688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" HandleID="k8s-pod-network.e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.860 [INFO][5688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" HandleID="k8s-pod-network.e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.861 [INFO][5688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:11.865146 env[1320]: 2025-07-14 22:45:11.862 [INFO][5680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:45:11.865765 env[1320]: time="2025-07-14T22:45:11.865188738Z" level=info msg="TearDown network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\" successfully" Jul 14 22:45:11.865765 env[1320]: time="2025-07-14T22:45:11.865231582Z" level=info msg="StopPodSandbox for \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\" returns successfully" Jul 14 22:45:11.865765 env[1320]: time="2025-07-14T22:45:11.865708042Z" level=info msg="RemovePodSandbox for \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\"" Jul 14 22:45:11.865765 env[1320]: time="2025-07-14T22:45:11.865729944Z" level=info msg="Forcibly stopping sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\"" Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.905 [WARNING][5705] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0", GenerateName:"calico-apiserver-5f66f5ffdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"28f9fef2-ff3a-4233-92f1-c94976e9b138", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f66f5ffdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b3ee7cc2bb646c3475920a41c73eae4769cd0cd66cee7decf91bd318cfa4e6a", Pod:"calico-apiserver-5f66f5ffdc-7hdt4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ac7f929035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.906 [INFO][5705] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.906 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" iface="eth0" netns="" Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.906 [INFO][5705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.906 [INFO][5705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.929 [INFO][5714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" HandleID="k8s-pod-network.e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.930 [INFO][5714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.930 [INFO][5714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.936 [WARNING][5714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" HandleID="k8s-pod-network.e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.936 [INFO][5714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" HandleID="k8s-pod-network.e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--7hdt4-eth0" Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.937 [INFO][5714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:11.941341 env[1320]: 2025-07-14 22:45:11.939 [INFO][5705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4" Jul 14 22:45:11.941341 env[1320]: time="2025-07-14T22:45:11.941305092Z" level=info msg="TearDown network for sandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\" successfully" Jul 14 22:45:11.945295 env[1320]: time="2025-07-14T22:45:11.945256885Z" level=info msg="RemovePodSandbox \"e5dc0e7eeba64adc45448dcc2f370f82183125bbb8ef513f183f1f7fc34facf4\" returns successfully" Jul 14 22:45:11.945805 env[1320]: time="2025-07-14T22:45:11.945771259Z" level=info msg="StopPodSandbox for \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\"" Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:11.975 [WARNING][5731] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"63689819-3628-4d96-bf6f-7f8f144f2164", ResourceVersion:"1259", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a", Pod:"goldmane-58fd7646b9-q9tk2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic558ab75054", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:11.976 [INFO][5731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:11.976 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" iface="eth0" netns="" Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:11.976 [INFO][5731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:11.976 [INFO][5731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:11.995 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" HandleID="k8s-pod-network.832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:11.995 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:11.995 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:12.000 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" HandleID="k8s-pod-network.832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:12.000 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" HandleID="k8s-pod-network.832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:12.001 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:45:12.005017 env[1320]: 2025-07-14 22:45:12.002 [INFO][5731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:45:12.006193 env[1320]: time="2025-07-14T22:45:12.005472001Z" level=info msg="TearDown network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\" successfully" Jul 14 22:45:12.006193 env[1320]: time="2025-07-14T22:45:12.005516438Z" level=info msg="StopPodSandbox for \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\" returns successfully" Jul 14 22:45:12.006193 env[1320]: time="2025-07-14T22:45:12.006048164Z" level=info msg="RemovePodSandbox for \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\"" Jul 14 22:45:12.006193 env[1320]: time="2025-07-14T22:45:12.006074014Z" level=info msg="Forcibly stopping sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\"" Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.040 [WARNING][5758] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"63689819-3628-4d96-bf6f-7f8f144f2164", ResourceVersion:"1259", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09b28c9d9dc830da7f8a17605bc3cf28f43f2402338c7601845995b4c6c59f4a", Pod:"goldmane-58fd7646b9-q9tk2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic558ab75054", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.040 [INFO][5758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.040 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" iface="eth0" netns="" Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.040 [INFO][5758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.040 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.081 [INFO][5767] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" HandleID="k8s-pod-network.832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.081 [INFO][5767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.081 [INFO][5767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.089 [WARNING][5767] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" HandleID="k8s-pod-network.832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.089 [INFO][5767] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" HandleID="k8s-pod-network.832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Workload="localhost-k8s-goldmane--58fd7646b9--q9tk2-eth0" Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.090 [INFO][5767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:45:12.096955 env[1320]: 2025-07-14 22:45:12.094 [INFO][5758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b" Jul 14 22:45:12.098452 env[1320]: time="2025-07-14T22:45:12.098411028Z" level=info msg="TearDown network for sandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\" successfully" Jul 14 22:45:12.104003 env[1320]: time="2025-07-14T22:45:12.103942679Z" level=info msg="RemovePodSandbox \"832e2e651a8a4eca226ab8d7fa7cf7a9fad375f1b17e081b2b3db582ec3b754b\" returns successfully" Jul 14 22:45:12.104359 env[1320]: time="2025-07-14T22:45:12.104332692Z" level=info msg="StopPodSandbox for \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\"" Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.132 [WARNING][5785] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lx29x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26698740-5794-455a-b832-1e56047f0f19", ResourceVersion:"1299", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930", Pod:"csi-node-driver-lx29x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1870ab4050", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.133 [INFO][5785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.133 [INFO][5785] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" iface="eth0" netns="" Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.133 [INFO][5785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.133 [INFO][5785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.156 [INFO][5793] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" HandleID="k8s-pod-network.62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.157 [INFO][5793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.157 [INFO][5793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.163 [WARNING][5793] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" HandleID="k8s-pod-network.62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.163 [INFO][5793] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" HandleID="k8s-pod-network.62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.164 [INFO][5793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:12.167991 env[1320]: 2025-07-14 22:45:12.166 [INFO][5785] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:45:12.168707 env[1320]: time="2025-07-14T22:45:12.168342760Z" level=info msg="TearDown network for sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\" successfully" Jul 14 22:45:12.168707 env[1320]: time="2025-07-14T22:45:12.168577222Z" level=info msg="StopPodSandbox for \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\" returns successfully" Jul 14 22:45:12.169287 env[1320]: time="2025-07-14T22:45:12.169241976Z" level=info msg="RemovePodSandbox for \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\"" Jul 14 22:45:12.169355 env[1320]: time="2025-07-14T22:45:12.169296962Z" level=info msg="Forcibly stopping sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\"" Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.202 [WARNING][5811] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lx29x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26698740-5794-455a-b832-1e56047f0f19", ResourceVersion:"1299", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85808df1562ef3d85072bddd6d568648279d9838ac5010d14eb2b9949a713930", Pod:"csi-node-driver-lx29x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1870ab4050", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.202 [INFO][5811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.202 [INFO][5811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" iface="eth0" netns="" Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.202 [INFO][5811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.202 [INFO][5811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.223 [INFO][5820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" HandleID="k8s-pod-network.62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.223 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.223 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.229 [WARNING][5820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" HandleID="k8s-pod-network.62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.229 [INFO][5820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" HandleID="k8s-pod-network.62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Workload="localhost-k8s-csi--node--driver--lx29x-eth0" Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.231 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:45:12.234836 env[1320]: 2025-07-14 22:45:12.232 [INFO][5811] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d" Jul 14 22:45:12.234836 env[1320]: time="2025-07-14T22:45:12.234802807Z" level=info msg="TearDown network for sandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\" successfully" Jul 14 22:45:12.238582 env[1320]: time="2025-07-14T22:45:12.238528572Z" level=info msg="RemovePodSandbox \"62c323b9d71384740d226352c88b7cf31a5ac89331de58c4b0c013f33defcc5d\" returns successfully" Jul 14 22:45:12.239037 env[1320]: time="2025-07-14T22:45:12.239012988Z" level=info msg="StopPodSandbox for \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\"" Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.279 [WARNING][5839] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0", GenerateName:"calico-kube-controllers-87ddffd96-", Namespace:"calico-system", SelfLink:"", UID:"cb90fe31-1c87-48f5-81dd-a9f3638c4eaf", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"87ddffd96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831", Pod:"calico-kube-controllers-87ddffd96-qc6h6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7c82abb245", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.279 [INFO][5839] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.280 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" iface="eth0" netns="" Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.280 [INFO][5839] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.280 [INFO][5839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.301 [INFO][5847] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" HandleID="k8s-pod-network.ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.301 [INFO][5847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.301 [INFO][5847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.306 [WARNING][5847] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" HandleID="k8s-pod-network.ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.306 [INFO][5847] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" HandleID="k8s-pod-network.ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.307 [INFO][5847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:12.310777 env[1320]: 2025-07-14 22:45:12.309 [INFO][5839] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:45:12.311304 env[1320]: time="2025-07-14T22:45:12.310809850Z" level=info msg="TearDown network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\" successfully" Jul 14 22:45:12.311304 env[1320]: time="2025-07-14T22:45:12.310842323Z" level=info msg="StopPodSandbox for \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\" returns successfully" Jul 14 22:45:12.311354 env[1320]: time="2025-07-14T22:45:12.311308844Z" level=info msg="RemovePodSandbox for \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\"" Jul 14 22:45:12.311376 env[1320]: time="2025-07-14T22:45:12.311340645Z" level=info msg="Forcibly stopping sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\"" Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.339 [WARNING][5864] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0", GenerateName:"calico-kube-controllers-87ddffd96-", Namespace:"calico-system", SelfLink:"", UID:"cb90fe31-1c87-48f5-81dd-a9f3638c4eaf", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"87ddffd96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0140d3697f10fcbf5b0f86d9cd5df2b0e4a261020e952f24c52201802346d831", Pod:"calico-kube-controllers-87ddffd96-qc6h6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7c82abb245", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.339 [INFO][5864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.339 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" iface="eth0" netns="" Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.340 [INFO][5864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.340 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.359 [INFO][5873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" HandleID="k8s-pod-network.ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.360 [INFO][5873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.360 [INFO][5873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.364 [WARNING][5873] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" HandleID="k8s-pod-network.ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.365 [INFO][5873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" HandleID="k8s-pod-network.ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Workload="localhost-k8s-calico--kube--controllers--87ddffd96--qc6h6-eth0" Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.366 [INFO][5873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:12.370289 env[1320]: 2025-07-14 22:45:12.368 [INFO][5864] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803" Jul 14 22:45:12.370745 env[1320]: time="2025-07-14T22:45:12.370316021Z" level=info msg="TearDown network for sandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\" successfully" Jul 14 22:45:12.373604 env[1320]: time="2025-07-14T22:45:12.373582358Z" level=info msg="RemovePodSandbox \"ac1f719b227bb3537038eb5dd219738bef281cf1f4065b1bfda51fcf61d70803\" returns successfully" Jul 14 22:45:12.375692 env[1320]: time="2025-07-14T22:45:12.375658396Z" level=info msg="StopPodSandbox for \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\"" Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.405 [WARNING][5893] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0", GenerateName:"calico-apiserver-5f66f5ffdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e904006e-54c2-458a-afd4-0856ab783ed3", ResourceVersion:"1284", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f66f5ffdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a", Pod:"calico-apiserver-5f66f5ffdc-799wc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic84055f810a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.405 [INFO][5893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.405 [INFO][5893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" iface="eth0" netns="" Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.405 [INFO][5893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.405 [INFO][5893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.423 [INFO][5902] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" HandleID="k8s-pod-network.cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.423 [INFO][5902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.423 [INFO][5902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.432 [WARNING][5902] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" HandleID="k8s-pod-network.cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.432 [INFO][5902] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" HandleID="k8s-pod-network.cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.434 [INFO][5902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:12.438121 env[1320]: 2025-07-14 22:45:12.436 [INFO][5893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:45:12.438928 env[1320]: time="2025-07-14T22:45:12.438149732Z" level=info msg="TearDown network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\" successfully" Jul 14 22:45:12.438928 env[1320]: time="2025-07-14T22:45:12.438177836Z" level=info msg="StopPodSandbox for \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\" returns successfully" Jul 14 22:45:12.438928 env[1320]: time="2025-07-14T22:45:12.438740122Z" level=info msg="RemovePodSandbox for \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\"" Jul 14 22:45:12.438928 env[1320]: time="2025-07-14T22:45:12.438777053Z" level=info msg="Forcibly stopping sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\"" Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.466 [WARNING][5919] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0", GenerateName:"calico-apiserver-5f66f5ffdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e904006e-54c2-458a-afd4-0856ab783ed3", ResourceVersion:"1284", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f66f5ffdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"182847537de6430699bdfeb1fd5be76ccfed2f4d67ea9564fed7651d1aaa851a", Pod:"calico-apiserver-5f66f5ffdc-799wc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic84055f810a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.467 [INFO][5919] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.467 [INFO][5919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" iface="eth0" netns="" Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.467 [INFO][5919] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.468 [INFO][5919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.489 [INFO][5927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" HandleID="k8s-pod-network.cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.490 [INFO][5927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.490 [INFO][5927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.494 [WARNING][5927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" HandleID="k8s-pod-network.cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.495 [INFO][5927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" HandleID="k8s-pod-network.cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Workload="localhost-k8s-calico--apiserver--5f66f5ffdc--799wc-eth0" Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.496 [INFO][5927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:12.500274 env[1320]: 2025-07-14 22:45:12.498 [INFO][5919] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958" Jul 14 22:45:12.500274 env[1320]: time="2025-07-14T22:45:12.500227398Z" level=info msg="TearDown network for sandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\" successfully" Jul 14 22:45:12.504590 env[1320]: time="2025-07-14T22:45:12.504542740Z" level=info msg="RemovePodSandbox \"cde8c35fe4fd8b533683484a057addab896e45cfdd4320a5896d924889f16958\" returns successfully" Jul 14 22:45:12.505039 env[1320]: time="2025-07-14T22:45:12.505016385Z" level=info msg="StopPodSandbox for \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\"" Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.534 [WARNING][5945] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"44d35327-7f5c-4584-8b0a-dbf8a90adea6", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e", Pod:"coredns-7c65d6cfc9-j6xgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2262a24860", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.534 [INFO][5945] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.534 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" iface="eth0" netns="" Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.534 [INFO][5945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.534 [INFO][5945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.553 [INFO][5954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" HandleID="k8s-pod-network.74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.553 [INFO][5954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.553 [INFO][5954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.559 [WARNING][5954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" HandleID="k8s-pod-network.74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.559 [INFO][5954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" HandleID="k8s-pod-network.74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.560 [INFO][5954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:12.564209 env[1320]: 2025-07-14 22:45:12.562 [INFO][5945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:45:12.564687 env[1320]: time="2025-07-14T22:45:12.564235882Z" level=info msg="TearDown network for sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\" successfully" Jul 14 22:45:12.564687 env[1320]: time="2025-07-14T22:45:12.564264937Z" level=info msg="StopPodSandbox for \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\" returns successfully" Jul 14 22:45:12.564814 env[1320]: time="2025-07-14T22:45:12.564781456Z" level=info msg="RemovePodSandbox for \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\"" Jul 14 22:45:12.564861 env[1320]: time="2025-07-14T22:45:12.564820270Z" level=info msg="Forcibly stopping sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\"" Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.592 [WARNING][5973] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"44d35327-7f5c-4584-8b0a-dbf8a90adea6", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 43, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71f62ae51f43749d21e2fd2eef40d2e839923f6f089c1bfb6eba8b03b1b4617e", Pod:"coredns-7c65d6cfc9-j6xgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2262a24860", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.592 [INFO][5973] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.592 [INFO][5973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" iface="eth0" netns="" Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.592 [INFO][5973] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.592 [INFO][5973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.610 [INFO][5981] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" HandleID="k8s-pod-network.74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.610 [INFO][5981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.610 [INFO][5981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.615 [WARNING][5981] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" HandleID="k8s-pod-network.74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.615 [INFO][5981] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" HandleID="k8s-pod-network.74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Workload="localhost-k8s-coredns--7c65d6cfc9--j6xgm-eth0" Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.616 [INFO][5981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:12.619910 env[1320]: 2025-07-14 22:45:12.618 [INFO][5973] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087" Jul 14 22:45:12.620375 env[1320]: time="2025-07-14T22:45:12.619931154Z" level=info msg="TearDown network for sandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\" successfully" Jul 14 22:45:12.623661 env[1320]: time="2025-07-14T22:45:12.623615468Z" level=info msg="RemovePodSandbox \"74814d61b969299eea6dc7df11cc77582efdd5cbb3c87c778c41be549ed8c087\" returns successfully" Jul 14 22:45:12.624175 env[1320]: time="2025-07-14T22:45:12.624151353Z" level=info msg="StopPodSandbox for \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\"" Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.656 [WARNING][5999] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" WorkloadEndpoint="localhost-k8s-whisker--844f5b784b--xzh64-eth0" Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.657 [INFO][5999] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.657 [INFO][5999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" iface="eth0" netns="" Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.657 [INFO][5999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.657 [INFO][5999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.677 [INFO][6008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" HandleID="k8s-pod-network.6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Workload="localhost-k8s-whisker--844f5b784b--xzh64-eth0" Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.677 [INFO][6008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.677 [INFO][6008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.682 [WARNING][6008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" HandleID="k8s-pod-network.6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Workload="localhost-k8s-whisker--844f5b784b--xzh64-eth0" Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.682 [INFO][6008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" HandleID="k8s-pod-network.6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Workload="localhost-k8s-whisker--844f5b784b--xzh64-eth0" Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.684 [INFO][6008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:12.687595 env[1320]: 2025-07-14 22:45:12.685 [INFO][5999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:45:12.688087 env[1320]: time="2025-07-14T22:45:12.687630324Z" level=info msg="TearDown network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\" successfully" Jul 14 22:45:12.688087 env[1320]: time="2025-07-14T22:45:12.687659321Z" level=info msg="StopPodSandbox for \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\" returns successfully" Jul 14 22:45:12.688134 env[1320]: time="2025-07-14T22:45:12.688114069Z" level=info msg="RemovePodSandbox for \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\"" Jul 14 22:45:12.688172 env[1320]: time="2025-07-14T22:45:12.688137554Z" level=info msg="Forcibly stopping sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\"" Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.716 [WARNING][6025] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" 
WorkloadEndpoint="localhost-k8s-whisker--844f5b784b--xzh64-eth0" Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.716 [INFO][6025] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.716 [INFO][6025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" iface="eth0" netns="" Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.716 [INFO][6025] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.716 [INFO][6025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.739 [INFO][6035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" HandleID="k8s-pod-network.6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Workload="localhost-k8s-whisker--844f5b784b--xzh64-eth0" Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.739 [INFO][6035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.739 [INFO][6035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.743 [WARNING][6035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" HandleID="k8s-pod-network.6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Workload="localhost-k8s-whisker--844f5b784b--xzh64-eth0" Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.743 [INFO][6035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" HandleID="k8s-pod-network.6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Workload="localhost-k8s-whisker--844f5b784b--xzh64-eth0" Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.745 [INFO][6035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:45:12.749155 env[1320]: 2025-07-14 22:45:12.747 [INFO][6025] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc" Jul 14 22:45:12.749574 env[1320]: time="2025-07-14T22:45:12.749187325Z" level=info msg="TearDown network for sandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\" successfully" Jul 14 22:45:12.753602 env[1320]: time="2025-07-14T22:45:12.753486687Z" level=info msg="RemovePodSandbox \"6b4839eb31c1a90f9ddb1eddf6c21d13658eb334454b17b6514fb8e46d50c6cc\" returns successfully" Jul 14 22:45:15.533475 systemd[1]: Started sshd@25-10.0.0.12:22-10.0.0.1:59082.service. Jul 14 22:45:15.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.12:22-10.0.0.1:59082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:15.534844 kernel: kauditd_printk_skb: 57 callbacks suppressed Jul 14 22:45:15.534898 kernel: audit: type=1130 audit(1752533115.532:622): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.12:22-10.0.0.1:59082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 14 22:45:15.576000 audit[6044]: USER_ACCT pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.577628 sshd[6044]: Accepted publickey for core from 10.0.0.1 port 59082 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:45:15.581734 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:45:15.580000 audit[6044]: CRED_ACQ pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.587838 kernel: audit: type=1101 audit(1752533115.576:623): pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.588072 kernel: audit: type=1103 audit(1752533115.580:624): pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.588106 kernel: audit: type=1006 audit(1752533115.580:625): pid=6044 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jul 14 22:45:15.588924 systemd[1]: Started session-26.scope. Jul 14 22:45:15.589866 systemd-logind[1309]: New session 26 of user core. 
Jul 14 22:45:15.580000 audit[6044]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2b70f8b0 a2=3 a3=0 items=0 ppid=1 pid=6044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:15.597294 kernel: audit: type=1300 audit(1752533115.580:625): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2b70f8b0 a2=3 a3=0 items=0 ppid=1 pid=6044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:15.580000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:15.599268 kernel: audit: type=1327 audit(1752533115.580:625): proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:15.599321 kernel: audit: type=1105 audit(1752533115.594:626): pid=6044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.594000 audit[6044]: USER_START pid=6044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.604528 kernel: audit: type=1103 audit(1752533115.596:627): pid=6047 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.596000 audit[6047]: CRED_ACQ pid=6047 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.743401 sshd[6044]: pam_unix(sshd:session): session closed for user core Jul 14 22:45:15.743000 audit[6044]: USER_END pid=6044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.746146 systemd[1]: sshd@25-10.0.0.12:22-10.0.0.1:59082.service: Deactivated successfully. Jul 14 22:45:15.747144 systemd[1]: session-26.scope: Deactivated successfully. Jul 14 22:45:15.748803 systemd-logind[1309]: Session 26 logged out. Waiting for processes to exit. Jul 14 22:45:15.749540 systemd-logind[1309]: Removed session 26. Jul 14 22:45:15.743000 audit[6044]: CRED_DISP pid=6044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.753379 kernel: audit: type=1106 audit(1752533115.743:628): pid=6044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.753461 kernel: audit: type=1104 audit(1752533115.743:629): pid=6044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:15.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.12:22-10.0.0.1:59082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 14 22:45:16.140000 audit[6059]: NETFILTER_CFG table=filter:137 family=2 entries=22 op=nft_register_rule pid=6059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:45:16.140000 audit[6059]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd103d52f0 a2=0 a3=7ffd103d52dc items=0 ppid=2323 pid=6059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:16.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:45:16.146000 audit[6059]: NETFILTER_CFG table=nat:138 family=2 entries=108 op=nft_register_chain pid=6059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:45:16.146000 audit[6059]: SYSCALL arch=c000003e syscall=46 success=yes exit=50220 a0=3 a1=7ffd103d52f0 a2=0 a3=7ffd103d52dc items=0 ppid=2323 pid=6059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:16.146000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:45:20.747466 systemd[1]: Started sshd@26-10.0.0.12:22-10.0.0.1:41670.service. Jul 14 22:45:20.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.12:22-10.0.0.1:41670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:45:20.748579 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 14 22:45:20.748651 kernel: audit: type=1130 audit(1752533120.746:633): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.12:22-10.0.0.1:41670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:20.787000 audit[6061]: USER_ACCT pid=6061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.788779 sshd[6061]: Accepted publickey for core from 10.0.0.1 port 41670 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:45:20.790764 sshd[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:45:20.788000 audit[6061]: CRED_ACQ pid=6061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.795709 systemd[1]: Started session-27.scope. Jul 14 22:45:20.796124 systemd-logind[1309]: New session 27 of user core. 
Jul 14 22:45:20.798008 kernel: audit: type=1101 audit(1752533120.787:634): pid=6061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.798066 kernel: audit: type=1103 audit(1752533120.788:635): pid=6061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.798111 kernel: audit: type=1006 audit(1752533120.788:636): pid=6061 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jul 14 22:45:20.788000 audit[6061]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc4e1b2c0 a2=3 a3=0 items=0 ppid=1 pid=6061 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:20.803456 kernel: audit: type=1300 audit(1752533120.788:636): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc4e1b2c0 a2=3 a3=0 items=0 ppid=1 pid=6061 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:20.788000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:20.805217 kernel: audit: type=1327 audit(1752533120.788:636): proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:20.805271 kernel: audit: type=1105 audit(1752533120.803:637): pid=6061 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.803000 audit[6061]: USER_START pid=6061 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.804000 audit[6065]: CRED_ACQ pid=6065 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.812603 kernel: audit: type=1103 audit(1752533120.804:638): pid=6065 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.923080 sshd[6061]: pam_unix(sshd:session): session closed for user core Jul 14 22:45:20.923000 audit[6061]: USER_END pid=6061 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.925548 systemd[1]: sshd@26-10.0.0.12:22-10.0.0.1:41670.service: Deactivated successfully. Jul 14 22:45:20.926505 systemd[1]: session-27.scope: Deactivated successfully. 
Jul 14 22:45:20.930006 kernel: audit: type=1106 audit(1752533120.923:639): pid=6061 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.923000 audit[6061]: CRED_DISP pid=6061 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.12:22-10.0.0.1:41670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:20.935374 systemd-logind[1309]: Session 27 logged out. Waiting for processes to exit. Jul 14 22:45:20.936067 kernel: audit: type=1104 audit(1752533120.923:640): pid=6061 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:20.936287 systemd-logind[1309]: Removed session 27. 
Jul 14 22:45:23.097000 audit[6121]: NETFILTER_CFG table=filter:139 family=2 entries=9 op=nft_register_rule pid=6121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:45:23.097000 audit[6121]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffcd8305720 a2=0 a3=7ffcd830570c items=0 ppid=2323 pid=6121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:23.097000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:45:23.103000 audit[6121]: NETFILTER_CFG table=nat:140 family=2 entries=55 op=nft_register_chain pid=6121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 14 22:45:23.103000 audit[6121]: SYSCALL arch=c000003e syscall=46 success=yes exit=20100 a0=3 a1=7ffcd8305720 a2=0 a3=7ffcd830570c items=0 ppid=2323 pid=6121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:23.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 14 22:45:25.926319 systemd[1]: Started sshd@27-10.0.0.12:22-10.0.0.1:41686.service. Jul 14 22:45:25.933122 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 14 22:45:25.933292 kernel: audit: type=1130 audit(1752533125.924:644): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.12:22-10.0.0.1:41686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:45:25.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.12:22-10.0.0.1:41686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:25.979657 kernel: audit: type=1101 audit(1752533125.969:645): pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:25.979812 kernel: audit: type=1103 audit(1752533125.970:646): pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:25.969000 audit[6122]: USER_ACCT pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:25.970000 audit[6122]: CRED_ACQ pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:25.979980 sshd[6122]: Accepted publickey for core from 10.0.0.1 port 41686 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:45:25.972745 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:45:25.979674 systemd[1]: Started session-28.scope. Jul 14 22:45:25.980001 systemd-logind[1309]: New session 28 of user core. 
Jul 14 22:45:25.970000 audit[6122]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd386f38f0 a2=3 a3=0 items=0 ppid=1 pid=6122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:25.987255 kernel: audit: type=1006 audit(1752533125.970:647): pid=6122 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jul 14 22:45:25.987451 kernel: audit: type=1300 audit(1752533125.970:647): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd386f38f0 a2=3 a3=0 items=0 ppid=1 pid=6122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:25.987482 kernel: audit: type=1327 audit(1752533125.970:647): proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:25.970000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:25.983000 audit[6122]: USER_START pid=6122 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:25.992793 kernel: audit: type=1105 audit(1752533125.983:648): pid=6122 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:25.984000 audit[6126]: CRED_ACQ pid=6126 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 
22:45:25.996346 kernel: audit: type=1103 audit(1752533125.984:649): pid=6126 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:26.170519 sshd[6122]: pam_unix(sshd:session): session closed for user core Jul 14 22:45:26.170000 audit[6122]: USER_END pid=6122 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:26.173267 systemd-logind[1309]: Session 28 logged out. Waiting for processes to exit. Jul 14 22:45:26.173568 systemd[1]: sshd@27-10.0.0.12:22-10.0.0.1:41686.service: Deactivated successfully. Jul 14 22:45:26.174546 systemd[1]: session-28.scope: Deactivated successfully. Jul 14 22:45:26.174993 systemd-logind[1309]: Removed session 28. 
Jul 14 22:45:26.170000 audit[6122]: CRED_DISP pid=6122 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:26.179878 kernel: audit: type=1106 audit(1752533126.170:650): pid=6122 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:26.179939 kernel: audit: type=1104 audit(1752533126.170:651): pid=6122 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:26.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.12:22-10.0.0.1:41686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:26.658177 kubelet[2215]: E0714 22:45:26.658123 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:45:31.175121 systemd[1]: Started sshd@28-10.0.0.12:22-10.0.0.1:43312.service. Jul 14 22:45:31.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.12:22-10.0.0.1:43312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:45:31.176536 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 14 22:45:31.176600 kernel: audit: type=1130 audit(1752533131.174:653): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.12:22-10.0.0.1:43312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:45:31.214000 audit[6137]: USER_ACCT pid=6137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.217300 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:45:31.220098 sshd[6137]: Accepted publickey for core from 10.0.0.1 port 43312 ssh2: RSA SHA256:A++kM18xTvsrQlkdeybdn2+NTVTg1c5zhKR3oJNSaMg Jul 14 22:45:31.216000 audit[6137]: CRED_ACQ pid=6137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.224994 kernel: audit: type=1101 audit(1752533131.214:654): pid=6137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.225148 kernel: audit: type=1103 audit(1752533131.216:655): pid=6137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.227642 kernel: audit: type=1006 audit(1752533131.216:656): pid=6137 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=29 res=1 Jul 14 22:45:31.226927 systemd-logind[1309]: New session 29 of user core. Jul 14 22:45:31.227208 systemd[1]: Started session-29.scope. Jul 14 22:45:31.233018 kernel: audit: type=1300 audit(1752533131.216:656): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff491b7100 a2=3 a3=0 items=0 ppid=1 pid=6137 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:31.216000 audit[6137]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff491b7100 a2=3 a3=0 items=0 ppid=1 pid=6137 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:45:31.216000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:31.235387 kernel: audit: type=1327 audit(1752533131.216:656): proctitle=737368643A20636F7265205B707269765D Jul 14 22:45:31.232000 audit[6137]: USER_START pid=6137 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.234000 audit[6140]: CRED_ACQ pid=6140 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.243729 kernel: audit: type=1105 audit(1752533131.232:657): pid=6137 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.243841 kernel: audit: 
type=1103 audit(1752533131.234:658): pid=6140 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.337045 sshd[6137]: pam_unix(sshd:session): session closed for user core Jul 14 22:45:31.336000 audit[6137]: USER_END pid=6137 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.339129 systemd[1]: sshd@28-10.0.0.12:22-10.0.0.1:43312.service: Deactivated successfully. Jul 14 22:45:31.339832 systemd[1]: session-29.scope: Deactivated successfully. Jul 14 22:45:31.337000 audit[6137]: CRED_DISP pid=6137 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.346115 kernel: audit: type=1106 audit(1752533131.336:659): pid=6137 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.346158 kernel: audit: type=1104 audit(1752533131.337:660): pid=6137 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 14 22:45:31.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.12:22-10.0.0.1:43312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 14 22:45:31.346645 systemd-logind[1309]: Session 29 logged out. Waiting for processes to exit. Jul 14 22:45:31.347285 systemd-logind[1309]: Removed session 29.