Jul 15 11:29:00.842188 kernel: Linux version 5.15.188-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Jul 15 10:04:37 -00 2025
Jul 15 11:29:00.842210 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 15 11:29:00.842221 kernel: BIOS-provided physical RAM map:
Jul 15 11:29:00.842228 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 15 11:29:00.842235 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 15 11:29:00.842242 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 15 11:29:00.842250 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 15 11:29:00.842256 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 15 11:29:00.842268 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 15 11:29:00.842274 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 15 11:29:00.842288 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 11:29:00.842308 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 15 11:29:00.842324 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 11:29:00.842332 kernel: NX (Execute Disable) protection: active
Jul 15 11:29:00.842343 kernel: SMBIOS 2.8 present.
Jul 15 11:29:00.842352 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 15 11:29:00.842359 kernel: Hypervisor detected: KVM
Jul 15 11:29:00.842365 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 11:29:00.842371 kernel: kvm-clock: cpu 0, msr 5319b001, primary cpu clock
Jul 15 11:29:00.842377 kernel: kvm-clock: using sched offset of 2485105562 cycles
Jul 15 11:29:00.842383 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 11:29:00.842390 kernel: tsc: Detected 2794.750 MHz processor
Jul 15 11:29:00.842396 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 11:29:00.842403 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 11:29:00.842409 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 15 11:29:00.842416 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 11:29:00.842422 kernel: Using GB pages for direct mapping
Jul 15 11:29:00.842428 kernel: ACPI: Early table checksum verification disabled
Jul 15 11:29:00.842434 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 15 11:29:00.842440 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:29:00.842446 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:29:00.842452 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:29:00.842459 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 15 11:29:00.842474 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:29:00.842487 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:29:00.842496 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:29:00.842504 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:29:00.842512 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 15 11:29:00.842520 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 15 11:29:00.842528 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 15 11:29:00.842541 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 15 11:29:00.842547 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 15 11:29:00.842554 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 15 11:29:00.842560 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 15 11:29:00.842567 kernel: No NUMA configuration found
Jul 15 11:29:00.842573 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 15 11:29:00.842581 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 15 11:29:00.842587 kernel: Zone ranges:
Jul 15 11:29:00.842594 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 11:29:00.842600 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 15 11:29:00.842607 kernel: Normal empty
Jul 15 11:29:00.842616 kernel: Movable zone start for each node
Jul 15 11:29:00.842624 kernel: Early memory node ranges
Jul 15 11:29:00.842633 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 15 11:29:00.842655 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 15 11:29:00.842664 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 15 11:29:00.842670 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 11:29:00.842677 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 15 11:29:00.842683 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 15 11:29:00.842698 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 11:29:00.842704 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 11:29:00.842710 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 11:29:00.842717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 11:29:00.842725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 11:29:00.842731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 11:29:00.842741 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 11:29:00.842750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 11:29:00.842758 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 11:29:00.842767 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 11:29:00.842775 kernel: TSC deadline timer available
Jul 15 11:29:00.842784 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 15 11:29:00.842792 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 11:29:00.842798 kernel: kvm-guest: setup PV sched yield
Jul 15 11:29:00.842805 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 15 11:29:00.842813 kernel: Booting paravirtualized kernel on KVM
Jul 15 11:29:00.842819 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 11:29:00.842826 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Jul 15 11:29:00.842833 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Jul 15 11:29:00.842839 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Jul 15 11:29:00.842845 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 15 11:29:00.842852 kernel: kvm-guest: setup async PF for cpu 0
Jul 15 11:29:00.842859 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Jul 15 11:29:00.842868 kernel: kvm-guest: PV spinlocks enabled
Jul 15 11:29:00.842878 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 11:29:00.842887 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 15 11:29:00.842895 kernel: Policy zone: DMA32
Jul 15 11:29:00.842904 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 15 11:29:00.842911 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 11:29:00.842918 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 11:29:00.842924 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 11:29:00.842931 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 11:29:00.842939 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 134796K reserved, 0K cma-reserved)
Jul 15 11:29:00.842946 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 11:29:00.842952 kernel: ftrace: allocating 34607 entries in 136 pages
Jul 15 11:29:00.842958 kernel: ftrace: allocated 136 pages with 2 groups
Jul 15 11:29:00.842965 kernel: rcu: Hierarchical RCU implementation.
Jul 15 11:29:00.842972 kernel: rcu: RCU event tracing is enabled.
Jul 15 11:29:00.842979 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 11:29:00.842988 kernel: Rude variant of Tasks RCU enabled.
Jul 15 11:29:00.842997 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 11:29:00.843007 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 11:29:00.843016 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 11:29:00.843024 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 15 11:29:00.843031 kernel: random: crng init done
Jul 15 11:29:00.843037 kernel: Console: colour VGA+ 80x25
Jul 15 11:29:00.843044 kernel: printk: console [ttyS0] enabled
Jul 15 11:29:00.843050 kernel: ACPI: Core revision 20210730
Jul 15 11:29:00.843057 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 11:29:00.843063 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 11:29:00.843071 kernel: x2apic enabled
Jul 15 11:29:00.843080 kernel: Switched APIC routing to physical x2apic.
Jul 15 11:29:00.843087 kernel: kvm-guest: setup PV IPIs
Jul 15 11:29:00.843098 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 11:29:00.843107 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 15 11:29:00.843116 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 15 11:29:00.843125 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 11:29:00.843133 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 11:29:00.843140 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 11:29:00.843153 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 11:29:00.843160 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 11:29:00.843167 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 11:29:00.843175 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 15 11:29:00.843181 kernel: RETBleed: Mitigation: untrained return thunk
Jul 15 11:29:00.843188 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 11:29:00.843195 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 15 11:29:00.843202 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 11:29:00.843209 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 11:29:00.843217 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 11:29:00.843224 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 11:29:00.843231 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 15 11:29:00.843237 kernel: Freeing SMP alternatives memory: 32K
Jul 15 11:29:00.843244 kernel: pid_max: default: 32768 minimum: 301
Jul 15 11:29:00.843251 kernel: LSM: Security Framework initializing
Jul 15 11:29:00.843257 kernel: SELinux: Initializing.
Jul 15 11:29:00.843264 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:29:00.843272 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:29:00.843279 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 15 11:29:00.843287 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 11:29:00.843296 kernel: ... version: 0
Jul 15 11:29:00.843305 kernel: ... bit width: 48
Jul 15 11:29:00.843313 kernel: ... generic registers: 6
Jul 15 11:29:00.843322 kernel: ... value mask: 0000ffffffffffff
Jul 15 11:29:00.843331 kernel: ... max period: 00007fffffffffff
Jul 15 11:29:00.843338 kernel: ... fixed-purpose events: 0
Jul 15 11:29:00.843346 kernel: ... event mask: 000000000000003f
Jul 15 11:29:00.843353 kernel: signal: max sigframe size: 1776
Jul 15 11:29:00.843360 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 11:29:00.843367 kernel: smp: Bringing up secondary CPUs ...
Jul 15 11:29:00.843373 kernel: x86: Booting SMP configuration:
Jul 15 11:29:00.843380 kernel: .... node #0, CPUs: #1
Jul 15 11:29:00.843387 kernel: kvm-clock: cpu 1, msr 5319b041, secondary cpu clock
Jul 15 11:29:00.843394 kernel: kvm-guest: setup async PF for cpu 1
Jul 15 11:29:00.843403 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Jul 15 11:29:00.843414 kernel: #2
Jul 15 11:29:00.843423 kernel: kvm-clock: cpu 2, msr 5319b081, secondary cpu clock
Jul 15 11:29:00.843432 kernel: kvm-guest: setup async PF for cpu 2
Jul 15 11:29:00.843440 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Jul 15 11:29:00.843447 kernel: #3
Jul 15 11:29:00.843453 kernel: kvm-clock: cpu 3, msr 5319b0c1, secondary cpu clock
Jul 15 11:29:00.843460 kernel: kvm-guest: setup async PF for cpu 3
Jul 15 11:29:00.843467 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Jul 15 11:29:00.843474 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 11:29:00.843482 kernel: smpboot: Max logical packages: 1
Jul 15 11:29:00.843488 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 15 11:29:00.843495 kernel: devtmpfs: initialized
Jul 15 11:29:00.843502 kernel: x86/mm: Memory block size: 128MB
Jul 15 11:29:00.843509 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 11:29:00.843516 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 11:29:00.843523 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 11:29:00.843529 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 11:29:00.843536 kernel: audit: initializing netlink subsys (disabled)
Jul 15 11:29:00.843544 kernel: audit: type=2000 audit(1752578939.788:1): state=initialized audit_enabled=0 res=1
Jul 15 11:29:00.843551 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 11:29:00.843557 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 11:29:00.843564 kernel: cpuidle: using governor menu
Jul 15 11:29:00.843571 kernel: ACPI: bus type PCI registered
Jul 15 11:29:00.843578 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 11:29:00.843585 kernel: dca service started, version 1.12.1
Jul 15 11:29:00.843592 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 15 11:29:00.843599 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Jul 15 11:29:00.843606 kernel: PCI: Using configuration type 1 for base access
Jul 15 11:29:00.843613 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 11:29:00.843620 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 11:29:00.843627 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 11:29:00.843660 kernel: ACPI: Added _OSI(Module Device)
Jul 15 11:29:00.843670 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 11:29:00.843679 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 11:29:00.843696 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 15 11:29:00.843705 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 15 11:29:00.843716 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 15 11:29:00.843723 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 11:29:00.843730 kernel: ACPI: Interpreter enabled
Jul 15 11:29:00.843737 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 11:29:00.843744 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 11:29:00.843751 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 11:29:00.843758 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 11:29:00.843764 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 11:29:00.843891 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 11:29:00.843980 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 11:29:00.844063 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 11:29:00.844074 kernel: PCI host bridge to bus 0000:00
Jul 15 11:29:00.844495 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 11:29:00.844583 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 11:29:00.844671 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 11:29:00.844761 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 15 11:29:00.844833 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 15 11:29:00.844902 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 15 11:29:00.844970 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 11:29:00.845072 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 15 11:29:00.845161 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 15 11:29:00.845242 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 15 11:29:00.845323 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 15 11:29:00.845390 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 15 11:29:00.845455 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 11:29:00.845548 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 15 11:29:00.845650 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 15 11:29:00.845742 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 15 11:29:00.845826 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 15 11:29:00.845918 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 15 11:29:00.846003 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 15 11:29:00.846076 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 15 11:29:00.846161 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 15 11:29:00.846258 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 15 11:29:00.846335 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 15 11:29:00.846435 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 15 11:29:00.846527 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 15 11:29:00.846615 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 15 11:29:00.846718 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 15 11:29:00.846804 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 11:29:00.846898 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 15 11:29:00.846978 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 15 11:29:00.847062 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 15 11:29:00.847154 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 15 11:29:00.847235 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 15 11:29:00.847247 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 11:29:00.847255 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 11:29:00.847261 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 11:29:00.847268 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 11:29:00.847275 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 11:29:00.847285 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 11:29:00.847295 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 11:29:00.847304 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 11:29:00.847312 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 11:29:00.847321 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 11:29:00.847331 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 11:29:00.847348 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 11:29:00.847363 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 11:29:00.847370 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 11:29:00.847381 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 11:29:00.847387 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 11:29:00.847394 kernel: iommu: Default domain type: Translated
Jul 15 11:29:00.847401 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 11:29:00.847492 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 11:29:00.850742 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 11:29:00.850819 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 11:29:00.850829 kernel: vgaarb: loaded
Jul 15 11:29:00.850837 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 15 11:29:00.850848 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 15 11:29:00.850854 kernel: PTP clock support registered
Jul 15 11:29:00.850861 kernel: PCI: Using ACPI for IRQ routing
Jul 15 11:29:00.850868 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 11:29:00.850875 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 15 11:29:00.850882 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 15 11:29:00.850889 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 11:29:00.850895 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 11:29:00.850902 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 11:29:00.850910 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 11:29:00.850917 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 11:29:00.850924 kernel: pnp: PnP ACPI init
Jul 15 11:29:00.850997 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 15 11:29:00.851008 kernel: pnp: PnP ACPI: found 6 devices
Jul 15 11:29:00.851015 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 11:29:00.851022 kernel: NET: Registered PF_INET protocol family
Jul 15 11:29:00.851029 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 11:29:00.851037 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 11:29:00.851044 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 11:29:00.851051 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 11:29:00.851058 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 15 11:29:00.851065 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 11:29:00.851072 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:29:00.851079 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:29:00.851085 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 11:29:00.851093 kernel: NET: Registered PF_XDP protocol family
Jul 15 11:29:00.851154 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 11:29:00.851212 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 11:29:00.851270 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 11:29:00.851327 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 15 11:29:00.851383 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 15 11:29:00.851439 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 15 11:29:00.851448 kernel: PCI: CLS 0 bytes, default 64
Jul 15 11:29:00.851455 kernel: Initialise system trusted keyrings
Jul 15 11:29:00.851464 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 11:29:00.851471 kernel: Key type asymmetric registered
Jul 15 11:29:00.851478 kernel: Asymmetric key parser 'x509' registered
Jul 15 11:29:00.851485 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 11:29:00.851492 kernel: io scheduler mq-deadline registered
Jul 15 11:29:00.851498 kernel: io scheduler kyber registered
Jul 15 11:29:00.851505 kernel: io scheduler bfq registered
Jul 15 11:29:00.851512 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 11:29:00.851519 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 11:29:00.851528 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 11:29:00.851535 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 15 11:29:00.851542 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 11:29:00.851549 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 11:29:00.851556 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 11:29:00.851562 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 11:29:00.851569 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 11:29:00.851653 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 15 11:29:00.851665 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 11:29:00.851847 kernel: rtc_cmos 00:04: registered as rtc0
Jul 15 11:29:00.851944 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T11:29:00 UTC (1752578940)
Jul 15 11:29:00.852010 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 15 11:29:00.852020 kernel: NET: Registered PF_INET6 protocol family
Jul 15 11:29:00.852028 kernel: Segment Routing with IPv6
Jul 15 11:29:00.852036 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 11:29:00.852044 kernel: NET: Registered PF_PACKET protocol family
Jul 15 11:29:00.852051 kernel: Key type dns_resolver registered
Jul 15 11:29:00.852063 kernel: IPI shorthand broadcast: enabled
Jul 15 11:29:00.852071 kernel: sched_clock: Marking stable (404624757, 100771346)->(567562690, -62166587)
Jul 15 11:29:00.852079 kernel: registered taskstats version 1
Jul 15 11:29:00.852086 kernel: Loading compiled-in X.509 certificates
Jul 15 11:29:00.852094 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.188-flatcar: c4b3a19d3bd6de5654dc12075428550cf6251289'
Jul 15 11:29:00.852103 kernel: Key type .fscrypt registered
Jul 15 11:29:00.852111 kernel: Key type fscrypt-provisioning registered
Jul 15 11:29:00.852120 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 11:29:00.852129 kernel: ima: Allocated hash algorithm: sha1
Jul 15 11:29:00.852136 kernel: ima: No architecture policies found
Jul 15 11:29:00.852144 kernel: clk: Disabling unused clocks
Jul 15 11:29:00.852151 kernel: Freeing unused kernel image (initmem) memory: 47476K
Jul 15 11:29:00.852158 kernel: Write protecting the kernel read-only data: 28672k
Jul 15 11:29:00.852165 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 15 11:29:00.852173 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Jul 15 11:29:00.852180 kernel: Run /init as init process
Jul 15 11:29:00.852187 kernel: with arguments:
Jul 15 11:29:00.852195 kernel: /init
Jul 15 11:29:00.852204 kernel: with environment:
Jul 15 11:29:00.852211 kernel: HOME=/
Jul 15 11:29:00.852218 kernel: TERM=linux
Jul 15 11:29:00.852225 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 11:29:00.852236 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 15 11:29:00.852247 systemd[1]: Detected virtualization kvm.
Jul 15 11:29:00.852255 systemd[1]: Detected architecture x86-64.
Jul 15 11:29:00.852263 systemd[1]: Running in initrd.
Jul 15 11:29:00.852272 systemd[1]: No hostname configured, using default hostname.
Jul 15 11:29:00.852279 systemd[1]: Hostname set to .
Jul 15 11:29:00.852287 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 11:29:00.852295 systemd[1]: Queued start job for default target initrd.target.
Jul 15 11:29:00.852303 systemd[1]: Started systemd-ask-password-console.path.
Jul 15 11:29:00.852310 systemd[1]: Reached target cryptsetup.target.
Jul 15 11:29:00.852318 systemd[1]: Reached target paths.target.
Jul 15 11:29:00.852326 systemd[1]: Reached target slices.target.
Jul 15 11:29:00.852335 systemd[1]: Reached target swap.target.
Jul 15 11:29:00.852350 systemd[1]: Reached target timers.target.
Jul 15 11:29:00.852360 systemd[1]: Listening on iscsid.socket.
Jul 15 11:29:00.852368 systemd[1]: Listening on iscsiuio.socket.
Jul 15 11:29:00.852376 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 15 11:29:00.852385 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 15 11:29:00.852393 systemd[1]: Listening on systemd-journald.socket.
Jul 15 11:29:00.852401 systemd[1]: Listening on systemd-networkd.socket.
Jul 15 11:29:00.852409 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 15 11:29:00.852417 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 15 11:29:00.852425 systemd[1]: Reached target sockets.target.
Jul 15 11:29:00.852433 systemd[1]: Starting kmod-static-nodes.service...
Jul 15 11:29:00.852440 systemd[1]: Finished network-cleanup.service.
Jul 15 11:29:00.852448 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 11:29:00.852458 systemd[1]: Starting systemd-journald.service...
Jul 15 11:29:00.852466 systemd[1]: Starting systemd-modules-load.service...
Jul 15 11:29:00.852473 systemd[1]: Starting systemd-resolved.service...
Jul 15 11:29:00.852481 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 15 11:29:00.852489 systemd[1]: Finished kmod-static-nodes.service.
Jul 15 11:29:00.852497 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 11:29:00.852505 kernel: audit: type=1130 audit(1752578940.843:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.852513 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 15 11:29:00.852526 systemd-journald[198]: Journal started
Jul 15 11:29:00.852572 systemd-journald[198]: Runtime Journal (/run/log/journal/eb0598515c494edaa793fc40f6d3a2a9) is 6.0M, max 48.5M, 42.5M free.
Jul 15 11:29:00.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.842502 systemd-modules-load[199]: Inserted module 'overlay'
Jul 15 11:29:00.889326 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 15 11:29:00.889341 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 11:29:00.889355 kernel: Bridge firewalling registered
Jul 15 11:29:00.889363 kernel: audit: type=1130 audit(1752578940.884:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.889373 systemd[1]: Started systemd-journald.service.
Jul 15 11:29:00.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.870430 systemd-resolved[200]: Positive Trust Anchors:
Jul 15 11:29:00.870437 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 11:29:00.870463 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 15 11:29:00.872584 systemd-resolved[200]: Defaulting to hostname 'linux'.
Jul 15 11:29:00.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.880192 systemd-modules-load[199]: Inserted module 'br_netfilter'
Jul 15 11:29:00.909505 kernel: audit: type=1130 audit(1752578940.891:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.909532 kernel: audit: type=1130 audit(1752578940.897:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.909545 kernel: audit: type=1130 audit(1752578940.901:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.892567 systemd[1]: Started systemd-resolved.service.
Jul 15 11:29:00.911530 kernel: SCSI subsystem initialized
Jul 15 11:29:00.898326 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 15 11:29:00.901671 systemd[1]: Reached target nss-lookup.target.
Jul 15 11:29:00.909556 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 15 11:29:00.922197 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 11:29:00.922218 kernel: device-mapper: uevent: version 1.0.3
Jul 15 11:29:00.923390 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 15 11:29:00.926022 systemd-modules-load[199]: Inserted module 'dm_multipath'
Jul 15 11:29:00.926607 systemd[1]: Finished systemd-modules-load.service.
Jul 15 11:29:00.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.927586 systemd[1]: Starting systemd-sysctl.service...
Jul 15 11:29:00.930662 kernel: audit: type=1130 audit(1752578940.926:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.933278 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 15 11:29:00.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:00.935971 systemd[1]: Starting dracut-cmdline.service...
Jul 15 11:29:00.939149 kernel: audit: type=1130 audit(1752578940.934:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:00.939293 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:29:00.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:00.943661 kernel: audit: type=1130 audit(1752578940.940:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:00.945269 dracut-cmdline[221]: dracut-dracut-053 Jul 15 11:29:00.947082 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:29:00.999678 kernel: Loading iSCSI transport class v2.0-870. Jul 15 11:29:01.014665 kernel: iscsi: registered transport (tcp) Jul 15 11:29:01.035690 kernel: iscsi: registered transport (qla4xxx) Jul 15 11:29:01.035744 kernel: QLogic iSCSI HBA Driver Jul 15 11:29:01.068807 systemd[1]: Finished dracut-cmdline.service. Jul 15 11:29:01.072994 kernel: audit: type=1130 audit(1752578941.068:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:01.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:01.073030 systemd[1]: Starting dracut-pre-udev.service... Jul 15 11:29:01.118659 kernel: raid6: avx2x4 gen() 29934 MB/s Jul 15 11:29:01.135659 kernel: raid6: avx2x4 xor() 7406 MB/s Jul 15 11:29:01.152659 kernel: raid6: avx2x2 gen() 32493 MB/s Jul 15 11:29:01.169658 kernel: raid6: avx2x2 xor() 19257 MB/s Jul 15 11:29:01.186659 kernel: raid6: avx2x1 gen() 26513 MB/s Jul 15 11:29:01.203657 kernel: raid6: avx2x1 xor() 15339 MB/s Jul 15 11:29:01.220664 kernel: raid6: sse2x4 gen() 14811 MB/s Jul 15 11:29:01.237660 kernel: raid6: sse2x4 xor() 7278 MB/s Jul 15 11:29:01.254660 kernel: raid6: sse2x2 gen() 16259 MB/s Jul 15 11:29:01.271661 kernel: raid6: sse2x2 xor() 9835 MB/s Jul 15 11:29:01.288657 kernel: raid6: sse2x1 gen() 12399 MB/s Jul 15 11:29:01.305993 kernel: raid6: sse2x1 xor() 7801 MB/s Jul 15 11:29:01.306006 kernel: raid6: using algorithm avx2x2 gen() 32493 MB/s Jul 15 11:29:01.306015 kernel: raid6: .... xor() 19257 MB/s, rmw enabled Jul 15 11:29:01.306685 kernel: raid6: using avx2x2 recovery algorithm Jul 15 11:29:01.318661 kernel: xor: automatically using best checksumming function avx Jul 15 11:29:01.408683 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 15 11:29:01.416359 systemd[1]: Finished dracut-pre-udev.service. Jul 15 11:29:01.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:01.417000 audit: BPF prog-id=7 op=LOAD Jul 15 11:29:01.417000 audit: BPF prog-id=8 op=LOAD Jul 15 11:29:01.418809 systemd[1]: Starting systemd-udevd.service... Jul 15 11:29:01.429996 systemd-udevd[400]: Using default interface naming scheme 'v252'. 
Jul 15 11:29:01.433739 systemd[1]: Started systemd-udevd.service. Jul 15 11:29:01.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:01.435880 systemd[1]: Starting dracut-pre-trigger.service... Jul 15 11:29:01.447007 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jul 15 11:29:01.469779 systemd[1]: Finished dracut-pre-trigger.service. Jul 15 11:29:01.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:01.471524 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:29:01.507238 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:29:01.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:01.536003 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 11:29:01.541733 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 11:29:01.541746 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 11:29:01.541755 kernel: GPT:9289727 != 19775487 Jul 15 11:29:01.541779 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 11:29:01.541793 kernel: GPT:9289727 != 19775487 Jul 15 11:29:01.541801 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 11:29:01.541810 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:29:01.549840 kernel: AVX2 version of gcm_enc/dec engaged. Jul 15 11:29:01.549872 kernel: AES CTR mode by8 optimization enabled Jul 15 11:29:01.562660 kernel: libata version 3.00 loaded. 
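The GPT warnings above are informational, not a corruption: the backup header sits at LBA 9289727, but on a 19775488-block disk it belongs in the last LBA, so the disk image was evidently grown after partitioning (fixable with GNU Parted, as the kernel suggests, or with `sgdisk -e`). A quick check of the arithmetic, using only the block counts reported in the log:

```shell
# Values taken from the kernel messages above (512-byte logical blocks).
logical_blocks=19775488
expected_alt_lba=$((logical_blocks - 1))     # backup GPT header lives in the last LBA
found_alt_lba=9289727                        # where the image originally ended
orig_bytes=$(( (found_alt_lba + 1) * 512 ))  # image size before it was resized
echo "expected=$expected_alt_lba found=$found_alt_lba orig_bytes=$orig_bytes"
```

This reproduces the kernel's "GPT:9289727 != 19775487" comparison and shows the image was roughly 4.4 GiB before being resized to ~10.1 GB.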
Jul 15 11:29:01.571660 kernel: ahci 0000:00:1f.2: version 3.0 Jul 15 11:29:01.579940 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 15 11:29:01.579953 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 15 11:29:01.580035 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 15 11:29:01.580106 kernel: scsi host0: ahci Jul 15 11:29:01.580192 kernel: scsi host1: ahci Jul 15 11:29:01.580300 kernel: scsi host2: ahci Jul 15 11:29:01.580381 kernel: scsi host3: ahci Jul 15 11:29:01.580462 kernel: scsi host4: ahci Jul 15 11:29:01.580558 kernel: scsi host5: ahci Jul 15 11:29:01.580657 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 15 11:29:01.580676 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 15 11:29:01.580687 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 15 11:29:01.580696 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 15 11:29:01.580704 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 15 11:29:01.580713 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 15 11:29:01.575904 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 15 11:29:01.629354 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (451) Jul 15 11:29:01.631310 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 15 11:29:01.636850 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 15 11:29:01.637862 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 15 11:29:01.644989 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:29:01.646556 systemd[1]: Starting disk-uuid.service... 
Jul 15 11:29:01.892953 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 15 11:29:01.894238 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 15 11:29:01.894254 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 15 11:29:01.894268 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 15 11:29:01.894281 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 15 11:29:01.895690 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 15 11:29:01.896675 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 15 11:29:01.896693 kernel: ata3.00: applying bridge limits Jul 15 11:29:01.897978 kernel: ata3.00: configured for UDMA/100 Jul 15 11:29:01.898678 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 15 11:29:01.930681 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 15 11:29:01.947233 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 11:29:01.947248 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 15 11:29:02.122424 disk-uuid[538]: Primary Header is updated. Jul 15 11:29:02.122424 disk-uuid[538]: Secondary Entries is updated. Jul 15 11:29:02.122424 disk-uuid[538]: Secondary Header is updated. Jul 15 11:29:02.125809 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:29:02.130659 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:29:02.133667 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:29:03.133667 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:29:03.133943 disk-uuid[541]: The operation has completed successfully. Jul 15 11:29:03.161073 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 11:29:03.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:03.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.161147 systemd[1]: Finished disk-uuid.service. Jul 15 11:29:03.162121 systemd[1]: Starting verity-setup.service... Jul 15 11:29:03.173663 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 15 11:29:03.193217 systemd[1]: Found device dev-mapper-usr.device. Jul 15 11:29:03.194107 systemd[1]: Mounting sysusr-usr.mount... Jul 15 11:29:03.195570 systemd[1]: Finished verity-setup.service. Jul 15 11:29:03.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.254253 systemd[1]: Mounted sysusr-usr.mount. Jul 15 11:29:03.255684 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 15 11:29:03.255123 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 15 11:29:03.255800 systemd[1]: Starting ignition-setup.service... Jul 15 11:29:03.258363 systemd[1]: Starting parse-ip-for-networkd.service... Jul 15 11:29:03.264372 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 11:29:03.264427 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:29:03.264443 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:29:03.271475 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 15 11:29:03.278264 systemd[1]: Finished ignition-setup.service. Jul 15 11:29:03.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:03.280443 systemd[1]: Starting ignition-fetch-offline.service... Jul 15 11:29:03.315701 systemd[1]: Finished parse-ip-for-networkd.service. Jul 15 11:29:03.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.316000 audit: BPF prog-id=9 op=LOAD Jul 15 11:29:03.317836 systemd[1]: Starting systemd-networkd.service... Jul 15 11:29:03.319366 ignition[646]: Ignition 2.14.0 Jul 15 11:29:03.319378 ignition[646]: Stage: fetch-offline Jul 15 11:29:03.319432 ignition[646]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:29:03.319440 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:29:03.320718 ignition[646]: parsed url from cmdline: "" Jul 15 11:29:03.320722 ignition[646]: no config URL provided Jul 15 11:29:03.320729 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 11:29:03.320738 ignition[646]: no config at "/usr/lib/ignition/user.ign" Jul 15 11:29:03.320756 ignition[646]: op(1): [started] loading QEMU firmware config module Jul 15 11:29:03.320763 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 11:29:03.328480 ignition[646]: op(1): [finished] loading QEMU firmware config module Jul 15 11:29:03.338001 systemd-networkd[717]: lo: Link UP Jul 15 11:29:03.338008 systemd-networkd[717]: lo: Gained carrier Jul 15 11:29:03.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.338354 systemd-networkd[717]: Enumeration completed Jul 15 11:29:03.338530 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:29:03.338740 systemd[1]: Started systemd-networkd.service. 
Jul 15 11:29:03.339378 systemd-networkd[717]: eth0: Link UP Jul 15 11:29:03.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.339380 systemd-networkd[717]: eth0: Gained carrier Jul 15 11:29:03.340631 systemd[1]: Reached target network.target. Jul 15 11:29:03.342026 systemd[1]: Starting iscsiuio.service... Jul 15 11:29:03.351244 iscsid[724]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:29:03.351244 iscsid[724]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 15 11:29:03.351244 iscsid[724]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 15 11:29:03.351244 iscsid[724]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 15 11:29:03.351244 iscsid[724]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 15 11:29:03.351244 iscsid[724]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:29:03.351244 iscsid[724]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 15 11:29:03.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 15 11:29:03.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.345689 systemd[1]: Started iscsiuio.service. Jul 15 11:29:03.348070 systemd[1]: Starting iscsid.service... Jul 15 11:29:03.351471 systemd[1]: Started iscsid.service. Jul 15 11:29:03.354178 systemd[1]: Starting dracut-initqueue.service... Jul 15 11:29:03.361615 systemd[1]: Finished dracut-initqueue.service. Jul 15 11:29:03.362150 systemd[1]: Reached target remote-fs-pre.target. Jul 15 11:29:03.362283 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:29:03.362466 systemd[1]: Reached target remote-fs.target. Jul 15 11:29:03.363290 systemd[1]: Starting dracut-pre-mount.service... Jul 15 11:29:03.369055 systemd[1]: Finished dracut-pre-mount.service. Jul 15 11:29:03.401910 ignition[646]: parsing config with SHA512: 1d6695541157efa77f3d7e29b5b674b3d169d957e0188b212b24d35465acb7cf88aeb398bc7eaa4987cb3aef5104cdd9770f6413f54ba8106d0a00ff99c65d09 Jul 15 11:29:03.408769 unknown[646]: fetched base config from "system" Jul 15 11:29:03.409110 unknown[646]: fetched user config from "qemu" Jul 15 11:29:03.409522 ignition[646]: fetch-offline: fetch-offline passed Jul 15 11:29:03.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.410348 systemd[1]: Finished ignition-fetch-offline.service. Jul 15 11:29:03.409568 ignition[646]: Ignition finished successfully Jul 15 11:29:03.411439 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 11:29:03.412041 systemd[1]: Starting ignition-kargs.service... 
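The iscsid warnings a few entries back are harmless unless software iSCSI is actually in use; the remedy iscsid spells out can be sketched as follows — a minimal example that writes to a temporary directory rather than /etc/iscsi, using a made-up example IQN:

```shell
# Sketch only: create an initiator-name file of the shape iscsid asks for.
# The IQN is illustrative (iqn.<year>-<month>.<reversed domain>:<identifier>);
# a real deployment would write /etc/iscsi/initiatorname.iscsi instead.
conf_dir="$(mktemp -d)"
printf 'InitiatorName=iqn.2025-07.org.example.host:node1\n' \
  > "$conf_dir/initiatorname.iscsi"
cat "$conf_dir/initiatorname.iscsi"
```

On distributions shipping open-iscsi, the `iscsi-iname` utility can generate a unique IQN for this file instead of hand-writing one.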
Jul 15 11:29:03.412197 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:29:03.420363 ignition[738]: Ignition 2.14.0 Jul 15 11:29:03.420372 ignition[738]: Stage: kargs Jul 15 11:29:03.420451 ignition[738]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:29:03.420459 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:29:03.422882 systemd[1]: Finished ignition-kargs.service. Jul 15 11:29:03.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.421378 ignition[738]: kargs: kargs passed Jul 15 11:29:03.425306 systemd[1]: Starting ignition-disks.service... Jul 15 11:29:03.421410 ignition[738]: Ignition finished successfully Jul 15 11:29:03.430897 ignition[744]: Ignition 2.14.0 Jul 15 11:29:03.430908 ignition[744]: Stage: disks Jul 15 11:29:03.430995 ignition[744]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:29:03.431004 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:29:03.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.432487 systemd[1]: Finished ignition-disks.service. Jul 15 11:29:03.431996 ignition[744]: disks: disks passed Jul 15 11:29:03.434046 systemd[1]: Reached target initrd-root-device.target. Jul 15 11:29:03.432029 ignition[744]: Ignition finished successfully Jul 15 11:29:03.435846 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:29:03.436700 systemd[1]: Reached target local-fs.target. Jul 15 11:29:03.438176 systemd[1]: Reached target sysinit.target. Jul 15 11:29:03.439594 systemd[1]: Reached target basic.target. Jul 15 11:29:03.441853 systemd[1]: Starting systemd-fsck-root.service... 
Jul 15 11:29:03.451115 systemd-fsck[752]: ROOT: clean, 619/553520 files, 56023/553472 blocks Jul 15 11:29:03.456254 systemd[1]: Finished systemd-fsck-root.service. Jul 15 11:29:03.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.459050 systemd[1]: Mounting sysroot.mount... Jul 15 11:29:03.465661 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 15 11:29:03.465817 systemd[1]: Mounted sysroot.mount. Jul 15 11:29:03.467145 systemd[1]: Reached target initrd-root-fs.target. Jul 15 11:29:03.469515 systemd[1]: Mounting sysroot-usr.mount... Jul 15 11:29:03.471091 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 15 11:29:03.471126 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 11:29:03.471143 systemd[1]: Reached target ignition-diskful.target. Jul 15 11:29:03.476312 systemd[1]: Mounted sysroot-usr.mount. Jul 15 11:29:03.478308 systemd[1]: Starting initrd-setup-root.service... Jul 15 11:29:03.481870 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 11:29:03.485533 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Jul 15 11:29:03.488811 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 11:29:03.492034 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 11:29:03.514809 systemd[1]: Finished initrd-setup-root.service. Jul 15 11:29:03.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:03.515710 systemd[1]: Starting ignition-mount.service... Jul 15 11:29:03.517318 systemd[1]: Starting sysroot-boot.service... Jul 15 11:29:03.521534 bash[803]: umount: /sysroot/usr/share/oem: not mounted. Jul 15 11:29:03.528325 ignition[805]: INFO : Ignition 2.14.0 Jul 15 11:29:03.528325 ignition[805]: INFO : Stage: mount Jul 15 11:29:03.529836 ignition[805]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:29:03.529836 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:29:03.532685 systemd[1]: Finished sysroot-boot.service. Jul 15 11:29:03.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:03.534093 ignition[805]: INFO : mount: mount passed Jul 15 11:29:03.534821 ignition[805]: INFO : Ignition finished successfully Jul 15 11:29:03.536074 systemd[1]: Finished ignition-mount.service. Jul 15 11:29:03.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:04.203004 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 15 11:29:04.209694 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814) Jul 15 11:29:04.209731 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 11:29:04.209741 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:29:04.210752 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:29:04.215526 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 15 11:29:04.216811 systemd[1]: Starting ignition-files.service... 
Jul 15 11:29:04.231778 ignition[834]: INFO : Ignition 2.14.0 Jul 15 11:29:04.231778 ignition[834]: INFO : Stage: files Jul 15 11:29:04.233694 ignition[834]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:29:04.233694 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:29:04.236955 ignition[834]: DEBUG : files: compiled without relabeling support, skipping Jul 15 11:29:04.238969 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 11:29:04.238969 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 11:29:04.243345 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 11:29:04.244803 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 11:29:04.246583 unknown[834]: wrote ssh authorized keys file for user: core Jul 15 11:29:04.247653 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 11:29:04.249377 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 15 11:29:04.251256 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 15 11:29:04.253114 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 11:29:04.255048 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 15 11:29:04.307431 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 11:29:04.636198 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 
11:29:04.636198 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 11:29:04.640147 
ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 11:29:04.640147 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 15 11:29:05.166448 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 11:29:05.292773 systemd-networkd[717]: eth0: Gained IPv6LL Jul 15 11:29:06.119369 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 11:29:06.119369 ignition[834]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 15 
11:29:06.123243 ignition[834]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 11:29:06.123243 ignition[834]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 11:29:06.162524 ignition[834]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 11:29:06.164084 ignition[834]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 11:29:06.164084 ignition[834]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 11:29:06.164084 ignition[834]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 11:29:06.164084 ignition[834]: INFO : files: files passed Jul 15 11:29:06.164084 ignition[834]: INFO : Ignition finished successfully Jul 15 11:29:06.170910 systemd[1]: Finished ignition-files.service. 
Jul 15 11:29:06.176948 kernel: kauditd_printk_skb: 24 callbacks suppressed Jul 15 11:29:06.176981 kernel: audit: type=1130 audit(1752578946.170:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.171896 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 15 11:29:06.176914 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 15 11:29:06.181416 initrd-setup-root-after-ignition[857]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 15 11:29:06.185684 kernel: audit: type=1130 audit(1752578946.180:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.177460 systemd[1]: Starting ignition-quench.service... Jul 15 11:29:06.193444 kernel: audit: type=1130 audit(1752578946.185:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.193462 kernel: audit: type=1131 audit(1752578946.185:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:06.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.193570 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 11:29:06.178826 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 15 11:29:06.181602 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 11:29:06.181677 systemd[1]: Finished ignition-quench.service. Jul 15 11:29:06.186524 systemd[1]: Reached target ignition-complete.target. Jul 15 11:29:06.194030 systemd[1]: Starting initrd-parse-etc.service... Jul 15 11:29:06.204465 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 11:29:06.204550 systemd[1]: Finished initrd-parse-etc.service. Jul 15 11:29:06.213096 kernel: audit: type=1130 audit(1752578946.205:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.213113 kernel: audit: type=1131 audit(1752578946.205:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:06.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.206251 systemd[1]: Reached target initrd-fs.target. Jul 15 11:29:06.213103 systemd[1]: Reached target initrd.target. Jul 15 11:29:06.213862 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 15 11:29:06.214477 systemd[1]: Starting dracut-pre-pivot.service... Jul 15 11:29:06.223185 systemd[1]: Finished dracut-pre-pivot.service. Jul 15 11:29:06.228037 kernel: audit: type=1130 audit(1752578946.223:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.224578 systemd[1]: Starting initrd-cleanup.service... Jul 15 11:29:06.232553 systemd[1]: Stopped target nss-lookup.target. Jul 15 11:29:06.233468 systemd[1]: Stopped target remote-cryptsetup.target. Jul 15 11:29:06.234969 systemd[1]: Stopped target timers.target. Jul 15 11:29:06.236529 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 11:29:06.242211 kernel: audit: type=1131 audit(1752578946.237:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.236651 systemd[1]: Stopped dracut-pre-pivot.service. 
Jul 15 11:29:06.238067 systemd[1]: Stopped target initrd.target. Jul 15 11:29:06.242284 systemd[1]: Stopped target basic.target. Jul 15 11:29:06.243735 systemd[1]: Stopped target ignition-complete.target. Jul 15 11:29:06.245209 systemd[1]: Stopped target ignition-diskful.target. Jul 15 11:29:06.246743 systemd[1]: Stopped target initrd-root-device.target. Jul 15 11:29:06.248309 systemd[1]: Stopped target remote-fs.target. Jul 15 11:29:06.249862 systemd[1]: Stopped target remote-fs-pre.target. Jul 15 11:29:06.251423 systemd[1]: Stopped target sysinit.target. Jul 15 11:29:06.252845 systemd[1]: Stopped target local-fs.target. Jul 15 11:29:06.254295 systemd[1]: Stopped target local-fs-pre.target. Jul 15 11:29:06.255768 systemd[1]: Stopped target swap.target. Jul 15 11:29:06.262697 kernel: audit: type=1131 audit(1752578946.257:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.257106 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 11:29:06.257191 systemd[1]: Stopped dracut-pre-mount.service. Jul 15 11:29:06.268596 kernel: audit: type=1131 audit(1752578946.263:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.258703 systemd[1]: Stopped target cryptsetup.target. 
Jul 15 11:29:06.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.262757 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 11:29:06.262840 systemd[1]: Stopped dracut-initqueue.service. Jul 15 11:29:06.264453 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 11:29:06.264548 systemd[1]: Stopped ignition-fetch-offline.service. Jul 15 11:29:06.268748 systemd[1]: Stopped target paths.target. Jul 15 11:29:06.270073 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 11:29:06.273678 systemd[1]: Stopped systemd-ask-password-console.path. Jul 15 11:29:06.275039 systemd[1]: Stopped target slices.target. Jul 15 11:29:06.276726 systemd[1]: Stopped target sockets.target. Jul 15 11:29:06.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.278213 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 11:29:06.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.278302 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 15 11:29:06.284354 iscsid[724]: iscsid shutting down. Jul 15 11:29:06.279832 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 11:29:06.279912 systemd[1]: Stopped ignition-files.service. Jul 15 11:29:06.281809 systemd[1]: Stopping ignition-mount.service... 
Jul 15 11:29:06.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.289589 ignition[874]: INFO : Ignition 2.14.0 Jul 15 11:29:06.289589 ignition[874]: INFO : Stage: umount Jul 15 11:29:06.289589 ignition[874]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:29:06.289589 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:29:06.289589 ignition[874]: INFO : umount: umount passed Jul 15 11:29:06.289589 ignition[874]: INFO : Ignition finished successfully Jul 15 11:29:06.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.283002 systemd[1]: Stopping iscsid.service... Jul 15 11:29:06.286099 systemd[1]: Stopping sysroot-boot.service... Jul 15 11:29:06.287082 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 11:29:06.287253 systemd[1]: Stopped systemd-udev-trigger.service. Jul 15 11:29:06.289549 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 11:29:06.289649 systemd[1]: Stopped dracut-pre-trigger.service. Jul 15 11:29:06.302311 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 11:29:06.303693 systemd[1]: iscsid.service: Deactivated successfully. Jul 15 11:29:06.304523 systemd[1]: Stopped iscsid.service. Jul 15 11:29:06.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.306115 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 11:29:06.307101 systemd[1]: Stopped ignition-mount.service. 
Jul 15 11:29:06.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.308917 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 11:29:06.309754 systemd[1]: Closed iscsid.socket. Jul 15 11:29:06.311089 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 11:29:06.311126 systemd[1]: Stopped ignition-disks.service. Jul 15 11:29:06.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.313351 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 11:29:06.313381 systemd[1]: Stopped ignition-kargs.service. Jul 15 11:29:06.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.315657 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 11:29:06.315688 systemd[1]: Stopped ignition-setup.service. Jul 15 11:29:06.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.317954 systemd[1]: Stopping iscsiuio.service... Jul 15 11:29:06.319409 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 11:29:06.319477 systemd[1]: Finished initrd-cleanup.service. Jul 15 11:29:06.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:06.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.322023 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 15 11:29:06.322902 systemd[1]: Stopped iscsiuio.service. Jul 15 11:29:06.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.324981 systemd[1]: Stopped target network.target. Jul 15 11:29:06.326368 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 11:29:06.326394 systemd[1]: Closed iscsiuio.socket. Jul 15 11:29:06.328545 systemd[1]: Stopping systemd-networkd.service... Jul 15 11:29:06.330110 systemd[1]: Stopping systemd-resolved.service... Jul 15 11:29:06.335670 systemd-networkd[717]: eth0: DHCPv6 lease lost Jul 15 11:29:06.336806 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 11:29:06.337750 systemd[1]: Stopped systemd-networkd.service. Jul 15 11:29:06.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.340707 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 11:29:06.341629 systemd[1]: Stopped systemd-resolved.service. Jul 15 11:29:06.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.343739 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 11:29:06.343768 systemd[1]: Closed systemd-networkd.socket. Jul 15 11:29:06.346732 systemd[1]: Stopping network-cleanup.service... 
Jul 15 11:29:06.346000 audit: BPF prog-id=9 op=UNLOAD Jul 15 11:29:06.346000 audit: BPF prog-id=6 op=UNLOAD Jul 15 11:29:06.348341 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 11:29:06.348387 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 15 11:29:06.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.351012 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 11:29:06.351054 systemd[1]: Stopped systemd-sysctl.service. Jul 15 11:29:06.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.353527 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 11:29:06.353571 systemd[1]: Stopped systemd-modules-load.service. Jul 15 11:29:06.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.356193 systemd[1]: Stopping systemd-udevd.service... Jul 15 11:29:06.358422 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 11:29:06.361613 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 11:29:06.362647 systemd[1]: Stopped network-cleanup.service. Jul 15 11:29:06.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.365049 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 11:29:06.366100 systemd[1]: Stopped systemd-udevd.service. 
Jul 15 11:29:06.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.367891 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 11:29:06.367926 systemd[1]: Closed systemd-udevd-control.socket. Jul 15 11:29:06.370293 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 11:29:06.370323 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 15 11:29:06.372650 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 11:29:06.372686 systemd[1]: Stopped dracut-pre-udev.service. Jul 15 11:29:06.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.374989 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 11:29:06.375019 systemd[1]: Stopped dracut-cmdline.service. Jul 15 11:29:06.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.377277 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 11:29:06.377307 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 15 11:29:06.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.380218 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 15 11:29:06.382926 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 11:29:06.382968 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. 
Jul 15 11:29:06.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.385627 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 11:29:06.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.385673 systemd[1]: Stopped kmod-static-nodes.service. Jul 15 11:29:06.387331 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 11:29:06.387362 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 15 11:29:06.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.391118 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 11:29:06.392773 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 11:29:06.393674 systemd[1]: Stopped sysroot-boot.service. Jul 15 11:29:06.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.395168 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 11:29:06.396205 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 15 11:29:06.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:06.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.397979 systemd[1]: Reached target initrd-switch-root.target. Jul 15 11:29:06.399561 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 11:29:06.399596 systemd[1]: Stopped initrd-setup-root.service. Jul 15 11:29:06.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:06.402480 systemd[1]: Starting initrd-switch-root.service... Jul 15 11:29:06.407409 systemd[1]: Switching root. Jul 15 11:29:06.408000 audit: BPF prog-id=8 op=UNLOAD Jul 15 11:29:06.408000 audit: BPF prog-id=7 op=UNLOAD Jul 15 11:29:06.410000 audit: BPF prog-id=5 op=UNLOAD Jul 15 11:29:06.410000 audit: BPF prog-id=4 op=UNLOAD Jul 15 11:29:06.411000 audit: BPF prog-id=3 op=UNLOAD Jul 15 11:29:06.426265 systemd-journald[198]: Journal stopped Jul 15 11:29:08.990802 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Jul 15 11:29:08.990848 kernel: SELinux: Class mctp_socket not defined in policy. Jul 15 11:29:08.990863 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 15 11:29:08.990872 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 15 11:29:08.990885 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 11:29:08.990894 kernel: SELinux: policy capability open_perms=1 Jul 15 11:29:08.990907 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 11:29:08.990916 kernel: SELinux: policy capability always_check_network=0 Jul 15 11:29:08.990930 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 11:29:08.990939 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 11:29:08.990948 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 11:29:08.990957 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 11:29:08.990969 systemd[1]: Successfully loaded SELinux policy in 37.146ms. Jul 15 11:29:08.990981 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.500ms. Jul 15 11:29:08.990993 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 15 11:29:08.991004 systemd[1]: Detected virtualization kvm. Jul 15 11:29:08.991013 systemd[1]: Detected architecture x86-64. Jul 15 11:29:08.991024 systemd[1]: Detected first boot. Jul 15 11:29:08.991035 systemd[1]: Initializing machine ID from VM UUID. Jul 15 11:29:08.991045 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 15 11:29:08.991055 systemd[1]: Populated /etc with preset unit settings. Jul 15 11:29:08.991065 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 15 11:29:08.991076 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:29:08.991087 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:29:08.991098 systemd[1]: Queued start job for default target multi-user.target. Jul 15 11:29:08.991109 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 15 11:29:08.991120 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 15 11:29:08.991130 systemd[1]: Created slice system-addon\x2drun.slice. Jul 15 11:29:08.991140 systemd[1]: Created slice system-getty.slice. Jul 15 11:29:08.991151 systemd[1]: Created slice system-modprobe.slice. Jul 15 11:29:08.991161 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 15 11:29:08.991170 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 15 11:29:08.991180 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 15 11:29:08.991190 systemd[1]: Created slice user.slice. Jul 15 11:29:08.991203 systemd[1]: Started systemd-ask-password-console.path. Jul 15 11:29:08.991213 systemd[1]: Started systemd-ask-password-wall.path. Jul 15 11:29:08.991224 systemd[1]: Set up automount boot.automount. Jul 15 11:29:08.991234 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 15 11:29:08.991246 systemd[1]: Reached target integritysetup.target. Jul 15 11:29:08.991256 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:29:08.991266 systemd[1]: Reached target remote-fs.target. Jul 15 11:29:08.991276 systemd[1]: Reached target slices.target. Jul 15 11:29:08.991286 systemd[1]: Reached target swap.target. Jul 15 11:29:08.991296 systemd[1]: Reached target torcx.target. Jul 15 11:29:08.991307 systemd[1]: Reached target veritysetup.target. 
Jul 15 11:29:08.991317 systemd[1]: Listening on systemd-coredump.socket. Jul 15 11:29:08.991327 systemd[1]: Listening on systemd-initctl.socket. Jul 15 11:29:08.991337 systemd[1]: Listening on systemd-journald-audit.socket. Jul 15 11:29:08.991346 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 15 11:29:08.991356 systemd[1]: Listening on systemd-journald.socket. Jul 15 11:29:08.991368 systemd[1]: Listening on systemd-networkd.socket. Jul 15 11:29:08.991378 systemd[1]: Listening on systemd-udevd-control.socket. Jul 15 11:29:08.991387 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 15 11:29:08.991398 systemd[1]: Listening on systemd-userdbd.socket. Jul 15 11:29:08.991408 systemd[1]: Mounting dev-hugepages.mount... Jul 15 11:29:08.991418 systemd[1]: Mounting dev-mqueue.mount... Jul 15 11:29:08.991429 systemd[1]: Mounting media.mount... Jul 15 11:29:08.991439 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:29:08.991449 systemd[1]: Mounting sys-kernel-debug.mount... Jul 15 11:29:08.991468 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 15 11:29:08.991479 systemd[1]: Mounting tmp.mount... Jul 15 11:29:08.991490 systemd[1]: Starting flatcar-tmpfiles.service... Jul 15 11:29:08.991501 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:29:08.991511 systemd[1]: Starting kmod-static-nodes.service... Jul 15 11:29:08.991521 systemd[1]: Starting modprobe@configfs.service... Jul 15 11:29:08.991531 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:29:08.991541 systemd[1]: Starting modprobe@drm.service... Jul 15 11:29:08.991551 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:29:08.991561 systemd[1]: Starting modprobe@fuse.service... Jul 15 11:29:08.991571 systemd[1]: Starting modprobe@loop.service... 
Jul 15 11:29:08.991581 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 11:29:08.991594 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 15 11:29:08.991604 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 15 11:29:08.991613 systemd[1]: Starting systemd-journald.service... Jul 15 11:29:08.991623 kernel: loop: module loaded Jul 15 11:29:08.991632 kernel: fuse: init (API version 7.34) Jul 15 11:29:08.991667 systemd[1]: Starting systemd-modules-load.service... Jul 15 11:29:08.991677 systemd[1]: Starting systemd-network-generator.service... Jul 15 11:29:08.991688 systemd[1]: Starting systemd-remount-fs.service... Jul 15 11:29:08.991698 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:29:08.991710 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:29:08.991720 systemd[1]: Mounted dev-hugepages.mount. Jul 15 11:29:08.991730 systemd[1]: Mounted dev-mqueue.mount. Jul 15 11:29:08.991740 systemd[1]: Mounted media.mount. Jul 15 11:29:08.991749 systemd[1]: Mounted sys-kernel-debug.mount. Jul 15 11:29:08.991762 systemd-journald[1015]: Journal started Jul 15 11:29:08.991801 systemd-journald[1015]: Runtime Journal (/run/log/journal/eb0598515c494edaa793fc40f6d3a2a9) is 6.0M, max 48.5M, 42.5M free. 
Jul 15 11:29:08.909000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 15 11:29:08.909000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 15 11:29:08.989000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 15 11:29:08.989000 audit[1015]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd6fe96fc0 a2=4000 a3=7ffd6fe9705c items=0 ppid=1 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:08.989000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 15 11:29:08.994101 systemd[1]: Started systemd-journald.service. Jul 15 11:29:08.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:08.994737 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 15 11:29:08.995886 systemd[1]: Mounted tmp.mount. Jul 15 11:29:08.996994 systemd[1]: Finished kmod-static-nodes.service. Jul 15 11:29:08.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:08.998100 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 11:29:08.998293 systemd[1]: Finished modprobe@configfs.service. 
Jul 15 11:29:08.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:08.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:08.999530 systemd[1]: Finished flatcar-tmpfiles.service. Jul 15 11:29:08.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.000568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:29:09.000768 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:29:09.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.001956 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:29:09.002152 systemd[1]: Finished modprobe@drm.service. Jul 15 11:29:09.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:09.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.003226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:29:09.003418 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:29:09.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.004534 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 11:29:09.004823 systemd[1]: Finished modprobe@fuse.service. Jul 15 11:29:09.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.005811 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:29:09.006055 systemd[1]: Finished modprobe@loop.service. Jul 15 11:29:09.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:09.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.007283 systemd[1]: Finished systemd-modules-load.service. Jul 15 11:29:09.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.008686 systemd[1]: Finished systemd-network-generator.service. Jul 15 11:29:09.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.009944 systemd[1]: Finished systemd-remount-fs.service. Jul 15 11:29:09.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.011178 systemd[1]: Reached target network-pre.target. Jul 15 11:29:09.013156 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 15 11:29:09.015069 systemd[1]: Mounting sys-kernel-config.mount... Jul 15 11:29:09.015823 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 11:29:09.017223 systemd[1]: Starting systemd-hwdb-update.service... Jul 15 11:29:09.019153 systemd[1]: Starting systemd-journal-flush.service... Jul 15 11:29:09.020218 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:29:09.021192 systemd[1]: Starting systemd-random-seed.service... 
Jul 15 11:29:09.022171 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:29:09.022446 systemd-journald[1015]: Time spent on flushing to /var/log/journal/eb0598515c494edaa793fc40f6d3a2a9 is 18.890ms for 1040 entries. Jul 15 11:29:09.022446 systemd-journald[1015]: System Journal (/var/log/journal/eb0598515c494edaa793fc40f6d3a2a9) is 8.0M, max 195.6M, 187.6M free. Jul 15 11:29:09.054384 systemd-journald[1015]: Received client request to flush runtime journal. Jul 15 11:29:09.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.023093 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:29:09.025674 systemd[1]: Starting systemd-sysusers.service... Jul 15 11:29:09.029017 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 15 11:29:09.030097 systemd[1]: Mounted sys-kernel-config.mount. Jul 15 11:29:09.055009 udevadm[1059]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jul 15 11:29:09.031275 systemd[1]: Finished systemd-random-seed.service. Jul 15 11:29:09.032418 systemd[1]: Reached target first-boot-complete.target. Jul 15 11:29:09.033775 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:29:09.035589 systemd[1]: Starting systemd-udev-settle.service... Jul 15 11:29:09.047254 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:29:09.048354 systemd[1]: Finished systemd-sysusers.service. Jul 15 11:29:09.050179 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 15 11:29:09.055107 systemd[1]: Finished systemd-journal-flush.service. Jul 15 11:29:09.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.065138 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 15 11:29:09.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.470159 systemd[1]: Finished systemd-hwdb-update.service. Jul 15 11:29:09.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.472209 systemd[1]: Starting systemd-udevd.service... Jul 15 11:29:09.486902 systemd-udevd[1069]: Using default interface naming scheme 'v252'. Jul 15 11:29:09.497972 systemd[1]: Started systemd-udevd.service. Jul 15 11:29:09.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:09.500207 systemd[1]: Starting systemd-networkd.service... Jul 15 11:29:09.507849 systemd[1]: Starting systemd-userdbd.service... Jul 15 11:29:09.537703 systemd[1]: Found device dev-ttyS0.device. Jul 15 11:29:09.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:09.545299 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:29:09.546407 systemd[1]: Started systemd-userdbd.service. Jul 15 11:29:09.561659 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 15 11:29:09.565752 kernel: ACPI: button: Power Button [PWRF] Jul 15 11:29:09.581000 audit[1081]: AVC avc: denied { confidentiality } for pid=1081 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 15 11:29:09.581000 audit[1081]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f04a2f78c0 a1=338ac a2=7f77e40f6bc5 a3=5 items=110 ppid=1069 pid=1081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:09.581000 audit: CWD cwd="/" Jul 15 11:29:09.581000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=1 name=(null) inode=1934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=2 name=(null) inode=1934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=3 name=(null) inode=1935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=4 name=(null) inode=1934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=5 name=(null) inode=1936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=6 name=(null) inode=1934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=7 name=(null) inode=1937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=8 name=(null) inode=1937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=9 name=(null) inode=1938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=10 name=(null) inode=1937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=11 name=(null) inode=1939 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=12 name=(null) inode=1937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=13 name=(null) inode=1940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=14 name=(null) inode=1937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=15 name=(null) inode=1941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=16 name=(null) inode=1937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=17 name=(null) inode=1942 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=18 name=(null) inode=1934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=19 name=(null) inode=1943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=20 name=(null) inode=1943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH 
item=21 name=(null) inode=1944 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=22 name=(null) inode=1943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=23 name=(null) inode=1945 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=24 name=(null) inode=1943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.588483 systemd-networkd[1077]: lo: Link UP Jul 15 11:29:09.588681 systemd-networkd[1077]: lo: Gained carrier Jul 15 11:29:09.589112 systemd-networkd[1077]: Enumeration completed Jul 15 11:29:09.589246 systemd[1]: Started systemd-networkd.service. Jul 15 11:29:09.590675 systemd-networkd[1077]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 15 11:29:09.591588 systemd-networkd[1077]: eth0: Link UP Jul 15 11:29:09.591666 systemd-networkd[1077]: eth0: Gained carrier Jul 15 11:29:09.581000 audit: PATH item=25 name=(null) inode=1946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=26 name=(null) inode=1943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=27 name=(null) inode=1947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=28 name=(null) inode=1943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=29 name=(null) inode=1948 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=30 name=(null) inode=1934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=31 name=(null) inode=1949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=32 name=(null) inode=1949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=33 name=(null) inode=1950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=34 name=(null) inode=1949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=35 name=(null) inode=1951 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=36 name=(null) inode=1949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=37 name=(null) inode=1952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=38 name=(null) inode=1949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=39 name=(null) inode=1953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=40 name=(null) inode=1949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=41 name=(null) inode=1954 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=42 name=(null) inode=1934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=43 name=(null) inode=1955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=44 name=(null) inode=1955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=45 name=(null) inode=1956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=46 name=(null) inode=1955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=47 name=(null) inode=1957 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=48 name=(null) inode=1955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=49 name=(null) inode=1958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=50 name=(null) inode=1955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=51 name=(null) inode=1959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 15 11:29:09.594665 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 11:29:09.581000 audit: PATH item=52 name=(null) inode=1955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=53 name=(null) inode=1960 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=55 name=(null) inode=1961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=56 name=(null) inode=1961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=57 name=(null) inode=1962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=58 name=(null) inode=1961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=59 name=(null) inode=1963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=60 name=(null) inode=1961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=61 name=(null) inode=1964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=62 name=(null) inode=1964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=63 name=(null) inode=1965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=64 name=(null) inode=1964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=65 name=(null) inode=1966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=66 name=(null) inode=1964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=67 name=(null) inode=1967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=68 name=(null) inode=1964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=69 name=(null) inode=1968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=70 name=(null) inode=1964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=71 name=(null) inode=1969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=72 name=(null) inode=1961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=73 name=(null) inode=1970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=74 name=(null) inode=1970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=75 name=(null) inode=1971 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=76 name=(null) inode=1970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=77 name=(null) inode=1972 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 audit: PATH item=78 name=(null) inode=1970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:29:09.581000 
audit: PATH item=79 name=(null) inode=1973 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=80 name=(null) inode=1970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=81 name=(null) inode=1974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=82 name=(null) inode=1970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=83 name=(null) inode=1975 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=84 name=(null) inode=1961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=85 name=(null) inode=1976 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=86 name=(null) inode=1976 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=87 name=(null) inode=1977 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=88 name=(null) inode=1976 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=89 name=(null) inode=1978 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:09.581000 audit: PATH item=90 name=(null) inode=1976 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=91 name=(null) inode=1979 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=92 name=(null) inode=1976 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=93 name=(null) inode=1980 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=94 name=(null) inode=1976 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=95 name=(null) inode=1981 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=96 name=(null) inode=1961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=97 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=98 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=99 name=(null) inode=1983 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=100 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=101 name=(null) inode=1984 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=102 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=103 name=(null) inode=1985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=104 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=105 name=(null) inode=1986 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=106 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=107 name=(null) inode=1987 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PATH item=109 name=(null) inode=1988 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:29:09.581000 audit: PROCTITLE proctitle="(udev-worker)"
Jul 15 11:29:09.603770 systemd-networkd[1077]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 11:29:09.609030 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 15 11:29:09.610786 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 15 11:29:09.610900 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 15 11:29:09.614665 kernel: mousedev: PS/2 mouse device common for all mice
Jul 15 11:29:09.664108 kernel: kvm: Nested Virtualization enabled
Jul 15 11:29:09.664187 kernel: SVM: kvm: Nested Paging enabled
Jul 15 11:29:09.664202 kernel: SVM: Virtual VMLOAD VMSAVE supported
Jul 15 11:29:09.664214 kernel: SVM: Virtual GIF supported
Jul 15 11:29:09.679666 kernel: EDAC MC: Ver: 3.0.0
Jul 15 11:29:09.702070 systemd[1]: Finished systemd-udev-settle.service.
Jul 15 11:29:09.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:09.704133 systemd[1]: Starting lvm2-activation-early.service...
Jul 15 11:29:09.711544 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 15 11:29:09.740408 systemd[1]: Finished lvm2-activation-early.service.
Jul 15 11:29:09.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:09.741436 systemd[1]: Reached target cryptsetup.target.
Jul 15 11:29:09.743216 systemd[1]: Starting lvm2-activation.service...
Jul 15 11:29:09.746945 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 15 11:29:09.779386 systemd[1]: Finished lvm2-activation.service.
Jul 15 11:29:09.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:09.780253 systemd[1]: Reached target local-fs-pre.target.
Jul 15 11:29:09.781092 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 11:29:09.781110 systemd[1]: Reached target local-fs.target.
Jul 15 11:29:09.781888 systemd[1]: Reached target machines.target.
Jul 15 11:29:09.783574 systemd[1]: Starting ldconfig.service...
Jul 15 11:29:09.784482 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 15 11:29:09.784518 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 15 11:29:09.785399 systemd[1]: Starting systemd-boot-update.service...
Jul 15 11:29:09.786872 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Jul 15 11:29:09.788977 systemd[1]: Starting systemd-machine-id-commit.service...
Jul 15 11:29:09.791326 systemd[1]: Starting systemd-sysext.service...
Jul 15 11:29:09.792414 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl)
Jul 15 11:29:09.793330 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Jul 15 11:29:09.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:09.798619 systemd[1]: Unmounting usr-share-oem.mount...
Jul 15 11:29:09.799860 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Jul 15 11:29:09.803726 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Jul 15 11:29:09.803944 systemd[1]: Unmounted usr-share-oem.mount.
Jul 15 11:29:09.811658 kernel: loop0: detected capacity change from 0 to 221472
Jul 15 11:29:09.827400 systemd-fsck[1120]: fsck.fat 4.2 (2021-01-31)
Jul 15 11:29:09.827400 systemd-fsck[1120]: /dev/vda1: 790 files, 120725/258078 clusters
Jul 15 11:29:09.828560 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Jul 15 11:29:09.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:09.831398 systemd[1]: Mounting boot.mount...
Jul 15 11:29:09.845206 systemd[1]: Mounted boot.mount.
Jul 15 11:29:10.087025 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 11:29:10.087722 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 15 11:29:10.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.091028 systemd[1]: Finished systemd-boot-update.service.
Jul 15 11:29:10.092151 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 11:29:10.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.105657 kernel: loop1: detected capacity change from 0 to 221472
Jul 15 11:29:10.109086 (sd-sysext)[1131]: Using extensions 'kubernetes'.
Jul 15 11:29:10.109427 (sd-sysext)[1131]: Merged extensions into '/usr'.
Jul 15 11:29:10.123415 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:29:10.124683 systemd[1]: Mounting usr-share-oem.mount...
Jul 15 11:29:10.125784 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.126957 systemd[1]: Starting modprobe@dm_mod.service...
Jul 15 11:29:10.128950 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 15 11:29:10.131263 systemd[1]: Starting modprobe@loop.service...
Jul 15 11:29:10.132445 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.132572 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 15 11:29:10.132684 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:29:10.135734 systemd[1]: Mounted usr-share-oem.mount.
Jul 15 11:29:10.137066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 11:29:10.137210 systemd[1]: Finished modprobe@dm_mod.service.
Jul 15 11:29:10.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.138792 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 11:29:10.138945 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 15 11:29:10.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.140377 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 11:29:10.140521 systemd[1]: Finished modprobe@loop.service.
Jul 15 11:29:10.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.141951 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 11:29:10.142038 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.143215 systemd[1]: Finished systemd-sysext.service.
Jul 15 11:29:10.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.145296 systemd[1]: Starting ensure-sysext.service...
Jul 15 11:29:10.146028 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 11:29:10.147448 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 15 11:29:10.152614 systemd[1]: Reloading.
Jul 15 11:29:10.157361 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 15 11:29:10.158453 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 11:29:10.159814 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 11:29:10.204789 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-07-15T11:29:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
Jul 15 11:29:10.205176 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-07-15T11:29:10Z" level=info msg="torcx already run"
Jul 15 11:29:10.278804 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 15 11:29:10.278819 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 15 11:29:10.297201 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 11:29:10.346234 systemd[1]: Finished ldconfig.service.
Jul 15 11:29:10.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.347479 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 15 11:29:10.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.351620 systemd[1]: Starting audit-rules.service...
Jul 15 11:29:10.353443 systemd[1]: Starting clean-ca-certificates.service...
Jul 15 11:29:10.355482 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 15 11:29:10.357839 systemd[1]: Starting systemd-resolved.service...
Jul 15 11:29:10.359900 systemd[1]: Starting systemd-timesyncd.service...
Jul 15 11:29:10.362590 systemd[1]: Starting systemd-update-utmp.service...
Jul 15 11:29:10.363893 systemd[1]: Finished clean-ca-certificates.service.
Jul 15 11:29:10.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.366000 audit[1227]: SYSTEM_BOOT pid=1227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.369568 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:29:10.370063 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.371339 systemd[1]: Starting modprobe@dm_mod.service...
Jul 15 11:29:10.373013 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 15 11:29:10.374833 systemd[1]: Starting modprobe@loop.service...
Jul 15 11:29:10.375545 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.375780 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 15 11:29:10.375921 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 11:29:10.376025 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:29:10.377254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 11:29:10.377520 systemd[1]: Finished modprobe@dm_mod.service.
Jul 15 11:29:10.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.378900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 11:29:10.379018 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 15 11:29:10.380000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 15 11:29:10.380000 audit[1242]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd6ce4f940 a2=420 a3=0 items=0 ppid=1215 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:29:10.380000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 15 11:29:10.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:29:10.382080 augenrules[1242]: No rules
Jul 15 11:29:10.382191 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 11:29:10.382309 systemd[1]: Finished modprobe@loop.service.
Jul 15 11:29:10.383514 systemd[1]: Finished audit-rules.service.
Jul 15 11:29:10.384627 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 11:29:10.384870 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.385846 systemd[1]: Finished systemd-update-utmp.service.
Jul 15 11:29:10.388005 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:29:10.388191 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.389524 systemd[1]: Starting modprobe@dm_mod.service...
Jul 15 11:29:10.391847 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 15 11:29:10.396064 systemd[1]: Starting modprobe@loop.service...
Jul 15 11:29:10.396783 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.396877 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 15 11:29:10.396961 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 11:29:10.397022 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:29:10.398193 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 15 11:29:10.399511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 11:29:10.399628 systemd[1]: Finished modprobe@dm_mod.service.
Jul 15 11:29:10.400718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 11:29:10.400827 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 15 11:29:10.401942 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 11:29:10.402180 systemd[1]: Finished modprobe@loop.service.
Jul 15 11:29:10.403149 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 11:29:10.403226 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.404516 systemd[1]: Starting systemd-update-done.service...
Jul 15 11:29:10.407699 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:29:10.407898 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.409180 systemd[1]: Starting modprobe@dm_mod.service...
Jul 15 11:29:10.410944 systemd[1]: Starting modprobe@drm.service...
Jul 15 11:29:10.412397 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 15 11:29:10.414732 systemd[1]: Starting modprobe@loop.service...
Jul 15 11:29:10.415772 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.415908 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 15 11:29:10.417621 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 15 11:29:10.419688 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 11:29:10.419814 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:29:10.421155 systemd[1]: Finished systemd-update-done.service.
Jul 15 11:29:10.422759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 11:29:10.422897 systemd[1]: Finished modprobe@dm_mod.service.
Jul 15 11:29:10.424207 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 11:29:10.424331 systemd[1]: Finished modprobe@drm.service.
Jul 15 11:29:10.425564 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 11:29:10.425917 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 15 11:29:10.427267 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 11:29:10.427560 systemd[1]: Finished modprobe@loop.service.
Jul 15 11:29:10.429025 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 11:29:10.429141 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.430668 systemd[1]: Finished ensure-sysext.service.
Jul 15 11:29:10.441088 systemd[1]: Started systemd-timesyncd.service.
Jul 15 11:29:10.442198 systemd[1]: Reached target time-set.target.
Jul 15 11:29:10.442787 systemd-resolved[1222]: Positive Trust Anchors:
Jul 15 11:29:10.442799 systemd-resolved[1222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 11:29:10.442825 systemd-resolved[1222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 15 11:29:10.443770 systemd-timesyncd[1223]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 15 11:29:10.443828 systemd-timesyncd[1223]: Initial clock synchronization to Tue 2025-07-15 11:29:10.197745 UTC.
Jul 15 11:29:10.449753 systemd-resolved[1222]: Defaulting to hostname 'linux'.
Jul 15 11:29:10.451031 systemd[1]: Started systemd-resolved.service.
Jul 15 11:29:10.451903 systemd[1]: Reached target network.target.
Jul 15 11:29:10.452731 systemd[1]: Reached target nss-lookup.target.
Jul 15 11:29:10.453606 systemd[1]: Reached target sysinit.target.
Jul 15 11:29:10.454527 systemd[1]: Started motdgen.path.
Jul 15 11:29:10.455292 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 15 11:29:10.456566 systemd[1]: Started logrotate.timer.
Jul 15 11:29:10.457416 systemd[1]: Started mdadm.timer.
Jul 15 11:29:10.458144 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 15 11:29:10.459041 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 11:29:10.459107 systemd[1]: Reached target paths.target.
Jul 15 11:29:10.459902 systemd[1]: Reached target timers.target.
Jul 15 11:29:10.460979 systemd[1]: Listening on dbus.socket.
Jul 15 11:29:10.462817 systemd[1]: Starting docker.socket...
Jul 15 11:29:10.464297 systemd[1]: Listening on sshd.socket.
Jul 15 11:29:10.465154 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 15 11:29:10.465390 systemd[1]: Listening on docker.socket.
Jul 15 11:29:10.466260 systemd[1]: Reached target sockets.target.
Jul 15 11:29:10.467088 systemd[1]: Reached target basic.target.
Jul 15 11:29:10.467936 systemd[1]: System is tainted: cgroupsv1
Jul 15 11:29:10.467974 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.467991 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 15 11:29:10.468729 systemd[1]: Starting containerd.service...
Jul 15 11:29:10.470325 systemd[1]: Starting dbus.service...
Jul 15 11:29:10.471731 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 15 11:29:10.473492 systemd[1]: Starting extend-filesystems.service...
Jul 15 11:29:10.474580 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 15 11:29:10.475570 systemd[1]: Starting motdgen.service...
Jul 15 11:29:10.476776 jq[1277]: false
Jul 15 11:29:10.479558 systemd[1]: Starting prepare-helm.service...
Jul 15 11:29:10.482031 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 15 11:29:10.483717 systemd[1]: Starting sshd-keygen.service...
Jul 15 11:29:10.486300 systemd[1]: Starting systemd-logind.service...
Jul 15 11:29:10.487229 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 15 11:29:10.487283 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 11:29:10.488205 systemd[1]: Starting update-engine.service...
Jul 15 11:29:10.490155 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 15 11:29:10.492560 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 11:29:10.494495 extend-filesystems[1278]: Found loop1
Jul 15 11:29:10.512746 extend-filesystems[1278]: Found sr0
Jul 15 11:29:10.512746 extend-filesystems[1278]: Found vda
Jul 15 11:29:10.512746 extend-filesystems[1278]: Found vda1
Jul 15 11:29:10.512746 extend-filesystems[1278]: Found vda2
Jul 15 11:29:10.512746 extend-filesystems[1278]: Found vda3
Jul 15 11:29:10.512746 extend-filesystems[1278]: Found usr
Jul 15 11:29:10.512746 extend-filesystems[1278]: Found vda4
Jul 15 11:29:10.512746 extend-filesystems[1278]: Found vda6
Jul 15 11:29:10.512746 extend-filesystems[1278]: Found vda7
Jul 15 11:29:10.512746 extend-filesystems[1278]: Found vda9
Jul 15 11:29:10.512746 extend-filesystems[1278]: Checking size of /dev/vda9
Jul 15 11:29:10.512746 extend-filesystems[1278]: Resized partition /dev/vda9
Jul 15 11:29:10.536929 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 15 11:29:10.536986 jq[1296]: true
Jul 15 11:29:10.500492 dbus-daemon[1275]: [system] SELinux support is enabled
Jul 15 11:29:10.496202 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 15 11:29:10.537316 update_engine[1293]: I0715 11:29:10.520613 1293 main.cc:92] Flatcar Update Engine starting
Jul 15 11:29:10.537316 update_engine[1293]: I0715 11:29:10.524796 1293 update_check_scheduler.cc:74] Next update check in 12m0s
Jul 15 11:29:10.537489 extend-filesystems[1311]: resize2fs 1.46.5 (30-Dec-2021)
Jul 15 11:29:10.497234 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 11:29:10.538845 tar[1300]: linux-amd64/helm
Jul 15 11:29:10.497452 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 15 11:29:10.539105 jq[1302]: true
Jul 15 11:29:10.500624 systemd[1]: Started dbus.service.
Jul 15 11:29:10.515397 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 11:29:10.515651 systemd[1]: Finished motdgen.service.
Jul 15 11:29:10.522397 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 11:29:10.522413 systemd[1]: Reached target system-config.target.
Jul 15 11:29:10.524873 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 11:29:10.524886 systemd[1]: Reached target user-config.target.
Jul 15 11:29:10.526671 systemd[1]: Started update-engine.service.
Jul 15 11:29:10.531431 systemd[1]: Started locksmithd.service.
Jul 15 11:29:10.549669 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 15 11:29:10.573532 env[1313]: time="2025-07-15T11:29:10.573489750Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 15 11:29:10.574869 extend-filesystems[1311]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 15 11:29:10.574869 extend-filesystems[1311]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 15 11:29:10.574869 extend-filesystems[1311]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 15 11:29:10.579875 extend-filesystems[1278]: Resized filesystem in /dev/vda9
Jul 15 11:29:10.577913 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 11:29:10.588687 bash[1334]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 11:29:10.578133 systemd[1]: Finished extend-filesystems.service.
Jul 15 11:29:10.579320 systemd-logind[1289]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 15 11:29:10.579335 systemd-logind[1289]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 15 11:29:10.584412 systemd-logind[1289]: New seat seat0.
Jul 15 11:29:10.586597 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 15 11:29:10.592924 systemd[1]: Started systemd-logind.service.
Jul 15 11:29:10.602516 env[1313]: time="2025-07-15T11:29:10.602433049Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 15 11:29:10.603183 env[1313]: time="2025-07-15T11:29:10.603045227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:29:10.604144 env[1313]: time="2025-07-15T11:29:10.604113450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.188-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:29:10.604144 env[1313]: time="2025-07-15T11:29:10.604140049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:29:10.604356 env[1313]: time="2025-07-15T11:29:10.604334694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:29:10.604356 env[1313]: time="2025-07-15T11:29:10.604352979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 15 11:29:10.604471 env[1313]: time="2025-07-15T11:29:10.604364651Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 15 11:29:10.604471 env[1313]: time="2025-07-15T11:29:10.604373708Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 15 11:29:10.604471 env[1313]: time="2025-07-15T11:29:10.604448738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 15 11:29:10.604649 env[1313]: time="2025-07-15T11:29:10.604615551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:29:10.604776 env[1313]: time="2025-07-15T11:29:10.604754031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:29:10.604776 env[1313]: time="2025-07-15T11:29:10.604771033Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 15 11:29:10.604850 env[1313]: time="2025-07-15T11:29:10.604811539Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 15 11:29:10.604850 env[1313]: time="2025-07-15T11:29:10.604821167Z" level=info msg="metadata content store policy set" policy=shared Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.609919716Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.609942229Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.609953740Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.609977505Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.609989297Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.610002281Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.610013352Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.610025795Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.610037177Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.610048848Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.610060450Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.610071170Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.610141292Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 15 11:29:10.610382 env[1313]: time="2025-07-15T11:29:10.610200012Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.610846865Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.610872954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.610884175Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.610920934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.610931173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.610941933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.610952092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.610961961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.610972971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.610983421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.610992628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.611005552Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.611104077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.611116410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 15 11:29:10.614544 env[1313]: time="2025-07-15T11:29:10.611126740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 15 11:29:10.611585 locksmithd[1324]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 11:29:10.615064 env[1313]: time="2025-07-15T11:29:10.611137540Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 15 11:29:10.615064 env[1313]: time="2025-07-15T11:29:10.611150083Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 15 11:29:10.615064 env[1313]: time="2025-07-15T11:29:10.611159932Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 15 11:29:10.615064 env[1313]: time="2025-07-15T11:29:10.611175481Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 15 11:29:10.615064 env[1313]: time="2025-07-15T11:29:10.611204054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 15 11:29:10.612250 systemd[1]: Started containerd.service. 
Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.611378091Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.611435488Z" level=info msg="Connect containerd service" Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.611461427Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.611913655Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.612075458Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.612104763Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.612140991Z" level=info msg="containerd successfully booted in 0.053147s" Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.613675759Z" level=info msg="Start subscribing containerd event" Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.613713299Z" level=info msg="Start recovering state" Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.613754326Z" level=info msg="Start event monitor" Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.613765738Z" level=info msg="Start snapshots syncer" Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.613772961Z" level=info msg="Start cni network conf syncer for default" Jul 15 11:29:10.615228 env[1313]: time="2025-07-15T11:29:10.613778812Z" level=info msg="Start streaming server" Jul 15 11:29:10.733739 systemd-networkd[1077]: eth0: Gained IPv6LL Jul 15 11:29:10.736039 systemd[1]: Finished systemd-networkd-wait-online.service. 
Jul 15 11:29:10.737281 systemd[1]: Reached target network-online.target. Jul 15 11:29:10.739538 systemd[1]: Starting kubelet.service... Jul 15 11:29:10.780109 sshd_keygen[1305]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 11:29:10.797085 systemd[1]: Finished sshd-keygen.service. Jul 15 11:29:10.799152 systemd[1]: Starting issuegen.service... Jul 15 11:29:10.804803 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 11:29:10.805019 systemd[1]: Finished issuegen.service. Jul 15 11:29:10.807173 systemd[1]: Starting systemd-user-sessions.service... Jul 15 11:29:10.812904 systemd[1]: Finished systemd-user-sessions.service. Jul 15 11:29:10.814901 systemd[1]: Started getty@tty1.service. Jul 15 11:29:10.816605 systemd[1]: Started serial-getty@ttyS0.service. Jul 15 11:29:10.818027 systemd[1]: Reached target getty.target. Jul 15 11:29:10.919336 tar[1300]: linux-amd64/LICENSE Jul 15 11:29:10.919451 tar[1300]: linux-amd64/README.md Jul 15 11:29:10.923546 systemd[1]: Finished prepare-helm.service. Jul 15 11:29:11.361836 systemd[1]: Started kubelet.service. Jul 15 11:29:11.363357 systemd[1]: Reached target multi-user.target. Jul 15 11:29:11.365513 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 15 11:29:11.371505 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 15 11:29:11.371780 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 15 11:29:11.373724 systemd[1]: Startup finished in 6.361s (kernel) + 4.908s (userspace) = 11.270s. 
Jul 15 11:29:11.735300 kubelet[1379]: E0715 11:29:11.735188 1379 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:29:11.736995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:29:11.737135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:29:13.483007 systemd[1]: Created slice system-sshd.slice. Jul 15 11:29:13.483960 systemd[1]: Started sshd@0-10.0.0.41:22-10.0.0.1:48810.service. Jul 15 11:29:13.521777 sshd[1389]: Accepted publickey for core from 10.0.0.1 port 48810 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:29:13.523169 sshd[1389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:13.529953 systemd[1]: Created slice user-500.slice. Jul 15 11:29:13.530716 systemd[1]: Starting user-runtime-dir@500.service... Jul 15 11:29:13.532290 systemd-logind[1289]: New session 1 of user core. Jul 15 11:29:13.538265 systemd[1]: Finished user-runtime-dir@500.service. Jul 15 11:29:13.539319 systemd[1]: Starting user@500.service... Jul 15 11:29:13.542181 (systemd)[1394]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:13.607600 systemd[1394]: Queued start job for default target default.target. Jul 15 11:29:13.607839 systemd[1394]: Reached target paths.target. Jul 15 11:29:13.607861 systemd[1394]: Reached target sockets.target. Jul 15 11:29:13.607875 systemd[1394]: Reached target timers.target. Jul 15 11:29:13.607884 systemd[1394]: Reached target basic.target. Jul 15 11:29:13.607925 systemd[1394]: Reached target default.target. Jul 15 11:29:13.607945 systemd[1394]: Startup finished in 60ms. 
Jul 15 11:29:13.608086 systemd[1]: Started user@500.service. Jul 15 11:29:13.609024 systemd[1]: Started session-1.scope. Jul 15 11:29:13.659577 systemd[1]: Started sshd@1-10.0.0.41:22-10.0.0.1:48822.service. Jul 15 11:29:13.695359 sshd[1403]: Accepted publickey for core from 10.0.0.1 port 48822 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:29:13.696373 sshd[1403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:13.699911 systemd-logind[1289]: New session 2 of user core. Jul 15 11:29:13.700531 systemd[1]: Started session-2.scope. Jul 15 11:29:13.752780 sshd[1403]: pam_unix(sshd:session): session closed for user core Jul 15 11:29:13.755013 systemd[1]: Started sshd@2-10.0.0.41:22-10.0.0.1:48830.service. Jul 15 11:29:13.755383 systemd[1]: sshd@1-10.0.0.41:22-10.0.0.1:48822.service: Deactivated successfully. Jul 15 11:29:13.756315 systemd-logind[1289]: Session 2 logged out. Waiting for processes to exit. Jul 15 11:29:13.756466 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 11:29:13.757288 systemd-logind[1289]: Removed session 2. Jul 15 11:29:13.789473 sshd[1408]: Accepted publickey for core from 10.0.0.1 port 48830 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:29:13.790490 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:13.793585 systemd-logind[1289]: New session 3 of user core. Jul 15 11:29:13.794212 systemd[1]: Started session-3.scope. Jul 15 11:29:13.841995 sshd[1408]: pam_unix(sshd:session): session closed for user core Jul 15 11:29:13.843904 systemd[1]: Started sshd@3-10.0.0.41:22-10.0.0.1:48846.service. Jul 15 11:29:13.844533 systemd[1]: sshd@2-10.0.0.41:22-10.0.0.1:48830.service: Deactivated successfully. Jul 15 11:29:13.845210 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 11:29:13.845556 systemd-logind[1289]: Session 3 logged out. Waiting for processes to exit. 
Jul 15 11:29:13.846243 systemd-logind[1289]: Removed session 3. Jul 15 11:29:13.878052 sshd[1415]: Accepted publickey for core from 10.0.0.1 port 48846 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:29:13.878990 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:13.881775 systemd-logind[1289]: New session 4 of user core. Jul 15 11:29:13.882350 systemd[1]: Started session-4.scope. Jul 15 11:29:13.933916 sshd[1415]: pam_unix(sshd:session): session closed for user core Jul 15 11:29:13.935668 systemd[1]: Started sshd@4-10.0.0.41:22-10.0.0.1:48860.service. Jul 15 11:29:13.936632 systemd[1]: sshd@3-10.0.0.41:22-10.0.0.1:48846.service: Deactivated successfully. Jul 15 11:29:13.937271 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 11:29:13.937393 systemd-logind[1289]: Session 4 logged out. Waiting for processes to exit. Jul 15 11:29:13.938312 systemd-logind[1289]: Removed session 4. Jul 15 11:29:13.970264 sshd[1422]: Accepted publickey for core from 10.0.0.1 port 48860 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:29:13.971146 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:13.974148 systemd-logind[1289]: New session 5 of user core. Jul 15 11:29:13.974762 systemd[1]: Started session-5.scope. Jul 15 11:29:14.027708 sudo[1428]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 11:29:14.027877 sudo[1428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 15 11:29:14.037857 dbus-daemon[1275]: \xd0\u001d\xed\xbc\x93U: received setenforce notice (enforcing=-1989593584) Jul 15 11:29:14.040593 sudo[1428]: pam_unix(sudo:session): session closed for user root Jul 15 11:29:14.042436 sshd[1422]: pam_unix(sshd:session): session closed for user core Jul 15 11:29:14.044688 systemd[1]: Started sshd@5-10.0.0.41:22-10.0.0.1:48866.service. 
Jul 15 11:29:14.045772 systemd[1]: sshd@4-10.0.0.41:22-10.0.0.1:48860.service: Deactivated successfully. Jul 15 11:29:14.046886 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 11:29:14.046997 systemd-logind[1289]: Session 5 logged out. Waiting for processes to exit. Jul 15 11:29:14.047907 systemd-logind[1289]: Removed session 5. Jul 15 11:29:14.081447 sshd[1430]: Accepted publickey for core from 10.0.0.1 port 48866 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:29:14.082602 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:14.085772 systemd-logind[1289]: New session 6 of user core. Jul 15 11:29:14.086439 systemd[1]: Started session-6.scope. Jul 15 11:29:14.138969 sudo[1437]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 11:29:14.139209 sudo[1437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 15 11:29:14.141776 sudo[1437]: pam_unix(sudo:session): session closed for user root Jul 15 11:29:14.146182 sudo[1436]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 15 11:29:14.146365 sudo[1436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 15 11:29:14.154710 systemd[1]: Stopping audit-rules.service... Jul 15 11:29:14.155000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 15 11:29:14.156006 auditctl[1440]: No rules Jul 15 11:29:14.156333 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 11:29:14.156532 systemd[1]: Stopped audit-rules.service. 
Jul 15 11:29:14.156665 kernel: kauditd_printk_skb: 220 callbacks suppressed Jul 15 11:29:14.156690 kernel: audit: type=1305 audit(1752578954.155:141): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 15 11:29:14.157796 systemd[1]: Starting audit-rules.service... Jul 15 11:29:14.155000 audit[1440]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff652c8140 a2=420 a3=0 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:14.164586 kernel: audit: type=1300 audit(1752578954.155:141): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff652c8140 a2=420 a3=0 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:14.164651 kernel: audit: type=1327 audit(1752578954.155:141): proctitle=2F7362696E2F617564697463746C002D44 Jul 15 11:29:14.164667 kernel: audit: type=1131 audit(1752578954.155:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.155000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 15 11:29:14.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.172817 augenrules[1458]: No rules Jul 15 11:29:14.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:14.173513 systemd[1]: Finished audit-rules.service. Jul 15 11:29:14.174677 sudo[1436]: pam_unix(sudo:session): session closed for user root Jul 15 11:29:14.173000 audit[1436]: USER_END pid=1436 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.177551 sshd[1430]: pam_unix(sshd:session): session closed for user core Jul 15 11:29:14.179604 systemd[1]: Started sshd@6-10.0.0.41:22-10.0.0.1:48868.service. Jul 15 11:29:14.179990 systemd[1]: sshd@5-10.0.0.41:22-10.0.0.1:48866.service: Deactivated successfully. Jul 15 11:29:14.181125 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 11:29:14.182111 kernel: audit: type=1130 audit(1752578954.172:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.182150 kernel: audit: type=1106 audit(1752578954.173:144): pid=1436 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.182166 kernel: audit: type=1104 audit(1752578954.173:145): pid=1436 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.173000 audit[1436]: CRED_DISP pid=1436 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.181520 systemd-logind[1289]: Session 6 logged out. Waiting for processes to exit. 
Jul 15 11:29:14.182681 systemd-logind[1289]: Removed session 6. Jul 15 11:29:14.176000 audit[1430]: USER_END pid=1430 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:14.190250 kernel: audit: type=1106 audit(1752578954.176:146): pid=1430 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:14.176000 audit[1430]: CRED_DISP pid=1430 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:14.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.41:22-10.0.0.1:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.41:22-10.0.0.1:48866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:14.191684 kernel: audit: type=1104 audit(1752578954.176:147): pid=1430 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:14.191733 kernel: audit: type=1130 audit(1752578954.178:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.41:22-10.0.0.1:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.216000 audit[1463]: USER_ACCT pid=1463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:14.217962 sshd[1463]: Accepted publickey for core from 10.0.0.1 port 48868 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:29:14.217000 audit[1463]: CRED_ACQ pid=1463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:14.217000 audit[1463]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff991e7500 a2=3 a3=0 items=0 ppid=1 pid=1463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:14.217000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:29:14.218976 sshd[1463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:14.222006 systemd-logind[1289]: New session 7 of user core. Jul 15 11:29:14.222619 systemd[1]: Started session-7.scope. 
Jul 15 11:29:14.224000 audit[1463]: USER_START pid=1463 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:14.225000 audit[1469]: CRED_ACQ pid=1469 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:14.270000 audit[1470]: USER_ACCT pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.271000 audit[1470]: CRED_REFR pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.272579 sudo[1470]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 11:29:14.272778 sudo[1470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 15 11:29:14.272000 audit[1470]: USER_START pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:29:14.290996 systemd[1]: Starting docker.service... 
Jul 15 11:29:14.324199 env[1482]: time="2025-07-15T11:29:14.324150281Z" level=info msg="Starting up" Jul 15 11:29:14.325379 env[1482]: time="2025-07-15T11:29:14.325357691Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 15 11:29:14.325445 env[1482]: time="2025-07-15T11:29:14.325428429Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 15 11:29:14.325523 env[1482]: time="2025-07-15T11:29:14.325504525Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 15 11:29:14.325589 env[1482]: time="2025-07-15T11:29:14.325571838Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 15 11:29:14.326964 env[1482]: time="2025-07-15T11:29:14.326949307Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 15 11:29:14.327030 env[1482]: time="2025-07-15T11:29:14.327014059Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 15 11:29:14.327099 env[1482]: time="2025-07-15T11:29:14.327081412Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 15 11:29:14.327164 env[1482]: time="2025-07-15T11:29:14.327147017Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 15 11:29:15.068840 env[1482]: time="2025-07-15T11:29:15.068799237Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 15 11:29:15.068840 env[1482]: time="2025-07-15T11:29:15.068824128Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 15 11:29:15.069023 env[1482]: time="2025-07-15T11:29:15.068968513Z" level=info msg="Loading containers: start." 
Jul 15 11:29:15.114000 audit[1516]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.114000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff4c9a2e40 a2=0 a3=7fff4c9a2e2c items=0 ppid=1482 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.114000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 15 11:29:15.116000 audit[1518]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.116000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdf27f9430 a2=0 a3=7ffdf27f941c items=0 ppid=1482 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.116000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 15 11:29:15.117000 audit[1520]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.117000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe679c93a0 a2=0 a3=7ffe679c938c items=0 ppid=1482 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.117000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 15 11:29:15.118000 audit[1522]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.118000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc0673ff10 a2=0 a3=7ffc0673fefc items=0 ppid=1482 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.118000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 15 11:29:15.120000 audit[1524]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.120000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffee7491a50 a2=0 a3=7ffee7491a3c items=0 ppid=1482 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.120000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 15 11:29:15.137000 audit[1529]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.137000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc0c47cd20 a2=0 a3=7ffc0c47cd0c items=0 ppid=1482 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.137000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 15 11:29:15.145000 audit[1531]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.145000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff452c5ef0 a2=0 a3=7fff452c5edc items=0 ppid=1482 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.145000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 15 11:29:15.146000 audit[1533]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.146000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd83d46200 a2=0 a3=7ffd83d461ec items=0 ppid=1482 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.146000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 15 11:29:15.149000 audit[1535]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.149000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fff8169ac20 a2=0 a3=7fff8169ac0c items=0 ppid=1482 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.149000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:29:15.156000 audit[1539]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.156000 audit[1539]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd88a16320 a2=0 a3=7ffd88a1630c items=0 ppid=1482 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.156000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:29:15.161000 audit[1540]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.161000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe6d3ef0d0 a2=0 a3=7ffe6d3ef0bc items=0 ppid=1482 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.161000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:29:15.170657 kernel: Initializing XFRM netlink socket Jul 15 11:29:15.199297 env[1482]: time="2025-07-15T11:29:15.199265685Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Jul 15 11:29:15.212000 audit[1548]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.212000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe41c45e20 a2=0 a3=7ffe41c45e0c items=0 ppid=1482 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.212000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 15 11:29:15.224000 audit[1551]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.224000 audit[1551]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd7b3a3a10 a2=0 a3=7ffd7b3a39fc items=0 ppid=1482 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.224000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 15 11:29:15.226000 audit[1554]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.226000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe48ad83c0 a2=0 a3=7ffe48ad83ac items=0 ppid=1482 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.226000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 15 11:29:15.228000 audit[1556]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.228000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc3b4218f0 a2=0 a3=7ffc3b4218dc items=0 ppid=1482 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.228000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 15 11:29:15.229000 audit[1558]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.229000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd8174ffd0 a2=0 a3=7ffd8174ffbc items=0 ppid=1482 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.229000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 15 11:29:15.231000 audit[1560]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.231000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd46008f70 a2=0 a3=7ffd46008f5c items=0 ppid=1482 
pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.231000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 15 11:29:15.232000 audit[1562]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.232000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc4b0bec10 a2=0 a3=7ffc4b0bebfc items=0 ppid=1482 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.232000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 15 11:29:15.237000 audit[1565]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.237000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffdea31ae10 a2=0 a3=7ffdea31adfc items=0 ppid=1482 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.237000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 15 11:29:15.239000 audit[1567]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.239000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff66eaf990 a2=0 a3=7fff66eaf97c items=0 ppid=1482 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.239000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 15 11:29:15.240000 audit[1569]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.240000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff327dcf80 a2=0 a3=7fff327dcf6c items=0 ppid=1482 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.240000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 15 11:29:15.242000 audit[1571]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.242000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe73e97b00 a2=0 a3=7ffe73e97aec items=0 ppid=1482 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.242000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 15 11:29:15.243952 systemd-networkd[1077]: docker0: Link UP Jul 15 11:29:15.250000 audit[1575]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.250000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe800c69a0 a2=0 a3=7ffe800c698c items=0 ppid=1482 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.250000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:29:15.258000 audit[1576]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:15.258000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffd557a910 a2=0 a3=7fffd557a8fc items=0 ppid=1482 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:15.258000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:29:15.259732 env[1482]: time="2025-07-15T11:29:15.259701388Z" level=info msg="Loading containers: done." 
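The audit `PROCTITLE` records in the docker startup sequence above encode each iptables invocation as a hex string of NUL-separated argv bytes. As a minimal sketch, they can be decoded with a few lines of Python; the sample string below is copied verbatim from the first `nft_register_chain` entry in this sequence:

```python
def decode_proctitle(hex_argv: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded, NUL-separated argv."""
    return bytes.fromhex(hex_argv).decode("utf-8").split("\x00")

# First PROCTITLE record from the docker startup sequence above.
sample = ("2F7573722F7362696E2F69707461626C6573002D2D77616974"
          "002D74006E6174002D4E00444F434B4552")
print(" ".join(decode_proctitle(sample)))
# → /usr/sbin/iptables --wait -t nat -N DOCKER
```

Decoding the remaining records the same way shows the daemon creating the usual `DOCKER`, `DOCKER-ISOLATION-STAGE-1/2`, and `DOCKER-USER` chains and their MASQUERADE/ACCEPT/DROP rules.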
Jul 15 11:29:15.271810 env[1482]: time="2025-07-15T11:29:15.271770164Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 11:29:15.271957 env[1482]: time="2025-07-15T11:29:15.271932780Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 15 11:29:15.272037 env[1482]: time="2025-07-15T11:29:15.272018412Z" level=info msg="Daemon has completed initialization" Jul 15 11:29:15.289380 systemd[1]: Started docker.service. Jul 15 11:29:15.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:15.293821 env[1482]: time="2025-07-15T11:29:15.293789350Z" level=info msg="API listen on /run/docker.sock" Jul 15 11:29:15.943653 env[1313]: time="2025-07-15T11:29:15.943593499Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 15 11:29:16.473450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3429166652.mount: Deactivated successfully. 
Jul 15 11:29:17.783783 env[1313]: time="2025-07-15T11:29:17.783734876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:17.785707 env[1313]: time="2025-07-15T11:29:17.785683141Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:17.787175 env[1313]: time="2025-07-15T11:29:17.787145372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:17.788856 env[1313]: time="2025-07-15T11:29:17.788820559Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:17.789418 env[1313]: time="2025-07-15T11:29:17.789388098Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 15 11:29:17.789908 env[1313]: time="2025-07-15T11:29:17.789885504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 15 11:29:19.350294 env[1313]: time="2025-07-15T11:29:19.350236380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:19.352282 env[1313]: time="2025-07-15T11:29:19.352232292Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 15 11:29:19.353754 env[1313]: time="2025-07-15T11:29:19.353733151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:19.356038 env[1313]: time="2025-07-15T11:29:19.356003492Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:19.356704 env[1313]: time="2025-07-15T11:29:19.356676254Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 15 11:29:19.357186 env[1313]: time="2025-07-15T11:29:19.357102224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 15 11:29:20.902452 env[1313]: time="2025-07-15T11:29:20.902403161Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:20.904272 env[1313]: time="2025-07-15T11:29:20.904224246Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:20.906003 env[1313]: time="2025-07-15T11:29:20.905974340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:20.907507 env[1313]: time="2025-07-15T11:29:20.907486127Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:20.908097 env[1313]: time="2025-07-15T11:29:20.908062013Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 15 11:29:20.908653 env[1313]: time="2025-07-15T11:29:20.908620400Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 15 11:29:21.916056 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 11:29:21.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:21.916229 systemd[1]: Stopped kubelet.service. Jul 15 11:29:21.917060 kernel: kauditd_printk_skb: 84 callbacks suppressed Jul 15 11:29:21.917102 kernel: audit: type=1130 audit(1752578961.915:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:21.917511 systemd[1]: Starting kubelet.service... Jul 15 11:29:21.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:21.922812 kernel: audit: type=1131 audit(1752578961.915:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:22.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:22.003501 systemd[1]: Started kubelet.service. Jul 15 11:29:22.007667 kernel: audit: type=1130 audit(1752578962.003:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:22.253809 kubelet[1622]: E0715 11:29:22.253453 1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:29:22.256005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:29:22.256131 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:29:22.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 15 11:29:22.259658 kernel: audit: type=1131 audit(1752578962.254:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 15 11:29:22.752010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039684989.mount: Deactivated successfully. 
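The bracketed value in the `kauditd` lines above, e.g. `audit(1752578961.915:183)`, is a Unix epoch timestamp (with milliseconds) followed by an audit event serial number. A small sketch converting one back to the wall-clock time shown in the journal prefix (this journal appears to run in UTC, per the `-00` zone in the kernel banner):

```python
from datetime import datetime, timezone

def audit_stamp(raw: str) -> str:
    """Convert an audit 'epoch.millis:serial' stamp to a journal-style UTC time."""
    epoch, _, _serial = raw.partition(":")
    ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return ts.strftime("%b %d %H:%M:%S")

print(audit_stamp("1752578961.915:183"))
# → Jul 15 11:29:21
```

This matches the `Jul 15 11:29:21` prefix on the corresponding journal line, confirming the serial `:183` event is the kubelet SERVICE_START logged there.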
Jul 15 11:29:23.339135 env[1313]: time="2025-07-15T11:29:23.339080162Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:23.340841 env[1313]: time="2025-07-15T11:29:23.340789734Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:23.342200 env[1313]: time="2025-07-15T11:29:23.342153221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:23.343663 env[1313]: time="2025-07-15T11:29:23.343602156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:23.344047 env[1313]: time="2025-07-15T11:29:23.344005802Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 15 11:29:23.344650 env[1313]: time="2025-07-15T11:29:23.344610565Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 11:29:23.874206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117785678.mount: Deactivated successfully. 
Jul 15 11:29:25.290755 env[1313]: time="2025-07-15T11:29:25.290689552Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:25.292619 env[1313]: time="2025-07-15T11:29:25.292586518Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:25.294581 env[1313]: time="2025-07-15T11:29:25.294535875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:25.296105 env[1313]: time="2025-07-15T11:29:25.296070914Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:25.296839 env[1313]: time="2025-07-15T11:29:25.296809378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 11:29:25.297336 env[1313]: time="2025-07-15T11:29:25.297288278Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 11:29:25.794484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214787880.mount: Deactivated successfully. 
Jul 15 11:29:25.807672 env[1313]: time="2025-07-15T11:29:25.807617404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:25.809989 env[1313]: time="2025-07-15T11:29:25.809935110Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:25.811287 env[1313]: time="2025-07-15T11:29:25.811255228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:25.812800 env[1313]: time="2025-07-15T11:29:25.812772180Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:25.813178 env[1313]: time="2025-07-15T11:29:25.813133948Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 11:29:25.813706 env[1313]: time="2025-07-15T11:29:25.813676465Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 11:29:26.311019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389103957.mount: Deactivated successfully. 
Jul 15 11:29:28.809350 env[1313]: time="2025-07-15T11:29:28.809288349Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:28.811132 env[1313]: time="2025-07-15T11:29:28.811089577Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:28.812910 env[1313]: time="2025-07-15T11:29:28.812856701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:28.814472 env[1313]: time="2025-07-15T11:29:28.814439194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:28.815304 env[1313]: time="2025-07-15T11:29:28.815270659Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 15 11:29:30.857137 systemd[1]: Stopped kubelet.service. Jul 15 11:29:30.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:30.859006 systemd[1]: Starting kubelet.service... Jul 15 11:29:30.863842 kernel: audit: type=1130 audit(1752578970.856:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:30.863917 kernel: audit: type=1131 audit(1752578970.856:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:30.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:30.898663 systemd[1]: Reloading. Jul 15 11:29:30.959781 /usr/lib/systemd/system-generators/torcx-generator[1680]: time="2025-07-15T11:29:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:29:30.960161 /usr/lib/systemd/system-generators/torcx-generator[1680]: time="2025-07-15T11:29:30Z" level=info msg="torcx already run" Jul 15 11:29:31.935940 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:29:31.935964 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:29:31.954755 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:29:32.022044 systemd[1]: Started kubelet.service. Jul 15 11:29:32.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:32.025675 kernel: audit: type=1130 audit(1752578972.021:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:32.026431 systemd[1]: Stopping kubelet.service... Jul 15 11:29:32.027198 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:29:32.027501 systemd[1]: Stopped kubelet.service. Jul 15 11:29:32.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:32.028784 systemd[1]: Starting kubelet.service... Jul 15 11:29:32.031673 kernel: audit: type=1131 audit(1752578972.026:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:32.108799 systemd[1]: Started kubelet.service. Jul 15 11:29:32.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:32.115653 kernel: audit: type=1130 audit(1752578972.110:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:32.141313 kubelet[1747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:29:32.141313 kubelet[1747]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 15 11:29:32.141313 kubelet[1747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:29:32.141687 kubelet[1747]: I0715 11:29:32.141345 1747 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:29:32.642603 kubelet[1747]: I0715 11:29:32.642562 1747 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 11:29:32.642603 kubelet[1747]: I0715 11:29:32.642586 1747 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:29:32.642853 kubelet[1747]: I0715 11:29:32.642831 1747 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 11:29:32.658737 kubelet[1747]: E0715 11:29:32.658707 1747 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:29:32.658953 kubelet[1747]: I0715 11:29:32.658934 1747 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:29:32.665633 kubelet[1747]: E0715 11:29:32.665609 1747 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:29:32.665711 kubelet[1747]: I0715 11:29:32.665650 1747 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jul 15 11:29:32.670067 kubelet[1747]: I0715 11:29:32.670049 1747 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 11:29:32.670772 kubelet[1747]: I0715 11:29:32.670747 1747 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 11:29:32.670989 kubelet[1747]: I0715 11:29:32.670957 1747 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:29:32.671170 kubelet[1747]: I0715 11:29:32.670987 1747 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"Experiment
alMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 15 11:29:32.671259 kubelet[1747]: I0715 11:29:32.671176 1747 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:29:32.671259 kubelet[1747]: I0715 11:29:32.671186 1747 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 11:29:32.671308 kubelet[1747]: I0715 11:29:32.671265 1747 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:29:32.678092 kubelet[1747]: I0715 11:29:32.678066 1747 kubelet.go:408] "Attempting to sync node with API server" Jul 15 11:29:32.678092 kubelet[1747]: I0715 11:29:32.678088 1747 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:29:32.678174 kubelet[1747]: I0715 11:29:32.678112 1747 kubelet.go:314] "Adding apiserver pod source" Jul 15 11:29:32.678174 kubelet[1747]: I0715 11:29:32.678134 1747 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:29:32.690440 kubelet[1747]: W0715 11:29:32.690400 1747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 15 11:29:32.690507 kubelet[1747]: E0715 11:29:32.690443 1747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:29:32.692244 kubelet[1747]: W0715 11:29:32.692203 1747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 15 11:29:32.692296 kubelet[1747]: E0715 11:29:32.692250 1747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:29:32.693138 kubelet[1747]: I0715 11:29:32.693115 1747 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:29:32.693458 kubelet[1747]: I0715 11:29:32.693437 1747 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 11:29:32.693916 kubelet[1747]: W0715 11:29:32.693901 1747 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 15 11:29:32.695743 kubelet[1747]: I0715 11:29:32.695721 1747 server.go:1274] "Started kubelet" Jul 15 11:29:32.695802 kubelet[1747]: I0715 11:29:32.695773 1747 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 11:29:32.696539 kubelet[1747]: I0715 11:29:32.696524 1747 server.go:449] "Adding debug handlers to kubelet server" Jul 15 11:29:32.695000 audit[1747]: AVC avc: denied { mac_admin } for pid=1747 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:29:32.695000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:29:32.700773 kubelet[1747]: I0715 11:29:32.696658 1747 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 15 11:29:32.700773 kubelet[1747]: I0715 11:29:32.696687 1747 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 15 11:29:32.700773 kubelet[1747]: I0715 11:29:32.696738 1747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:29:32.701453 kernel: audit: type=1400 audit(1752578972.695:192): avc: denied { mac_admin } for pid=1747 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:29:32.701487 kernel: audit: type=1401 audit(1752578972.695:192): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:29:32.701504 kernel: audit: type=1300 audit(1752578972.695:192): arch=c000003e syscall=188 success=no exit=-22 a0=c00088e630 a1=c000792930 a2=c00088e600 a3=25 items=0 ppid=1 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.695000 audit[1747]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00088e630 a1=c000792930 a2=c00088e600 a3=25 items=0 ppid=1 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.706338 kernel: audit: type=1327 audit(1752578972.695:192): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:29:32.695000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:29:32.707017 kubelet[1747]: I0715 11:29:32.706982 1747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:29:32.707176 kubelet[1747]: I0715 11:29:32.707156 1747 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:29:32.707605 kubelet[1747]: I0715 11:29:32.707578 1747 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:29:32.707605 kubelet[1747]: I0715 11:29:32.707590 1747 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 11:29:32.708229 kubelet[1747]: E0715 11:29:32.707765 1747 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jul 15 11:29:32.708573 kubelet[1747]: E0715 11:29:32.708448 1747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="200ms" Jul 15 11:29:32.709430 kubelet[1747]: I0715 11:29:32.708939 1747 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 11:29:32.709430 kubelet[1747]: W0715 11:29:32.709226 1747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 15 11:29:32.709430 kubelet[1747]: E0715 11:29:32.709256 1747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:29:32.709430 kubelet[1747]: I0715 11:29:32.709300 1747 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:29:32.709430 kubelet[1747]: I0715 11:29:32.709416 1747 factory.go:221] Registration of the systemd container factory successfully Jul 15 11:29:32.709574 kubelet[1747]: I0715 11:29:32.709466 1747 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:29:32.710508 kubelet[1747]: I0715 11:29:32.710457 1747 factory.go:221] Registration of the containerd container factory successfully Jul 15 11:29:32.710562 kernel: audit: type=1400 audit(1752578972.696:193): avc: denied { mac_admin } for pid=1747 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:29:32.696000 audit[1747]: AVC avc: denied { mac_admin } for pid=1747 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:29:32.712555 kubelet[1747]: E0715 11:29:32.711483 1747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.41:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.41:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1852694a44589b77 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:29:32.695698295 +0000 UTC m=+0.581368340,LastTimestamp:2025-07-15 11:29:32.695698295 +0000 UTC m=+0.581368340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 11:29:32.712668 kubelet[1747]: E0715 11:29:32.712609 1747 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:29:32.696000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:29:32.696000 audit[1747]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0007ae8a0 a1=c000792948 a2=c00088e6c0 a3=25 items=0 ppid=1 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.696000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:29:32.696000 audit[1760]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1760 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:32.696000 audit[1760]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffed912e8d0 a2=0 a3=7ffed912e8bc items=0 ppid=1747 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 15 11:29:32.696000 audit[1761]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:32.696000 audit[1761]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe59850f00 a2=0 a3=7ffe59850eec items=0 ppid=1747 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 15 11:29:32.706000 audit[1763]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1763 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:32.706000 audit[1763]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff51e6a310 a2=0 a3=7fff51e6a2fc items=0 ppid=1747 pid=1763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:29:32.710000 audit[1765]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1765 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:32.710000 audit[1765]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc005dc330 a2=0 a3=7ffc005dc31c items=0 ppid=1747 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:29:32.717000 audit[1768]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:32.717000 audit[1768]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff8edb6f80 a2=0 a3=7fff8edb6f6c items=0 ppid=1747 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 15 11:29:32.719988 kubelet[1747]: I0715 11:29:32.719953 1747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 11:29:32.721778 kubelet[1747]: I0715 11:29:32.721752 1747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 11:29:32.721778 kubelet[1747]: I0715 11:29:32.721770 1747 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 11:29:32.721917 kubelet[1747]: I0715 11:29:32.721835 1747 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 11:29:32.721941 kubelet[1747]: E0715 11:29:32.721915 1747 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:29:32.720000 audit[1771]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:32.720000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffc8662390 a2=0 a3=7fffc866237c items=0 ppid=1747 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.720000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 15 11:29:32.721000 audit[1772]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1772 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:32.721000 audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdaab2fa0 a2=0 a3=7ffcdaab2f8c items=0 ppid=1747 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.721000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 15 11:29:32.721000 audit[1773]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:32.721000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd120911b0 a2=0 a3=7ffd1209119c items=0 ppid=1747 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.721000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 15 11:29:32.722000 audit[1774]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:32.722000 audit[1774]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5a6826e0 a2=0 a3=7ffc5a6826cc items=0 ppid=1747 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.722000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 15 11:29:32.723000 audit[1775]: NETFILTER_CFG table=filter:35 
family=2 entries=1 op=nft_register_chain pid=1775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:32.723000 audit[1775]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffced83fd00 a2=0 a3=7ffced83fcec items=0 ppid=1747 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.723000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 15 11:29:32.725518 kubelet[1747]: W0715 11:29:32.725474 1747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 15 11:29:32.725568 kubelet[1747]: E0715 11:29:32.725525 1747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:29:32.724000 audit[1780]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:32.724000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffec3a40e30 a2=0 a3=7ffec3a40e1c items=0 ppid=1747 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.724000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 15 
11:29:32.725000 audit[1781]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:32.725000 audit[1781]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd65ae3ce0 a2=0 a3=7ffd65ae3ccc items=0 ppid=1747 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:32.725000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 15 11:29:32.727399 kubelet[1747]: I0715 11:29:32.727380 1747 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 11:29:32.727399 kubelet[1747]: I0715 11:29:32.727393 1747 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 11:29:32.727468 kubelet[1747]: I0715 11:29:32.727408 1747 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:29:32.808552 kubelet[1747]: E0715 11:29:32.808522 1747 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:32.822831 kubelet[1747]: E0715 11:29:32.822802 1747 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 11:29:32.909085 kubelet[1747]: E0715 11:29:32.909022 1747 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:32.909418 kubelet[1747]: E0715 11:29:32.909391 1747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="400ms" Jul 15 11:29:33.009561 kubelet[1747]: E0715 11:29:33.009523 1747 kubelet_node_status.go:453] "Error getting 
the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:33.023784 kubelet[1747]: E0715 11:29:33.023752 1747 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 11:29:33.110037 kubelet[1747]: E0715 11:29:33.109989 1747 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:33.188034 kubelet[1747]: I0715 11:29:33.187979 1747 policy_none.go:49] "None policy: Start" Jul 15 11:29:33.188886 kubelet[1747]: I0715 11:29:33.188867 1747 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 11:29:33.188886 kubelet[1747]: I0715 11:29:33.188887 1747 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:29:33.197855 kubelet[1747]: I0715 11:29:33.197821 1747 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:29:33.197000 audit[1747]: AVC avc: denied { mac_admin } for pid=1747 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:29:33.197000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:29:33.197000 audit[1747]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b606f0 a1=c000793998 a2=c000b60660 a3=25 items=0 ppid=1 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:33.197000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:29:33.198053 kubelet[1747]: I0715 11:29:33.197900 
1747 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 15 11:29:33.198053 kubelet[1747]: I0715 11:29:33.198008 1747 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:29:33.198053 kubelet[1747]: I0715 11:29:33.198019 1747 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:29:33.198793 kubelet[1747]: I0715 11:29:33.198764 1747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:29:33.199768 kubelet[1747]: E0715 11:29:33.199747 1747 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 11:29:33.299334 kubelet[1747]: I0715 11:29:33.299296 1747 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:29:33.299721 kubelet[1747]: E0715 11:29:33.299688 1747 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Jul 15 11:29:33.310125 kubelet[1747]: E0715 11:29:33.310082 1747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="800ms" Jul 15 11:29:33.501617 kubelet[1747]: I0715 11:29:33.501512 1747 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:29:33.501973 kubelet[1747]: E0715 11:29:33.501939 1747 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Jul 15 11:29:33.513213 kubelet[1747]: I0715 
11:29:33.513197 1747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eaf55351d5d8c327c1d57b8973e84263-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eaf55351d5d8c327c1d57b8973e84263\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:33.513289 kubelet[1747]: I0715 11:29:33.513218 1747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eaf55351d5d8c327c1d57b8973e84263-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eaf55351d5d8c327c1d57b8973e84263\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:33.513289 kubelet[1747]: I0715 11:29:33.513234 1747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:33.513289 kubelet[1747]: I0715 11:29:33.513248 1747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:33.513289 kubelet[1747]: I0715 11:29:33.513261 1747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:33.513289 kubelet[1747]: I0715 
11:29:33.513273 1747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:33.513426 kubelet[1747]: I0715 11:29:33.513285 1747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eaf55351d5d8c327c1d57b8973e84263-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eaf55351d5d8c327c1d57b8973e84263\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:33.513426 kubelet[1747]: I0715 11:29:33.513297 1747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:33.513426 kubelet[1747]: I0715 11:29:33.513317 1747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:29:33.529553 kubelet[1747]: W0715 11:29:33.529536 1747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 15 11:29:33.529603 kubelet[1747]: E0715 11:29:33.529562 1747 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:29:33.639365 kubelet[1747]: W0715 11:29:33.639294 1747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 15 11:29:33.639365 kubelet[1747]: E0715 11:29:33.639357 1747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:29:33.669025 kubelet[1747]: W0715 11:29:33.668974 1747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 15 11:29:33.669025 kubelet[1747]: E0715 11:29:33.669015 1747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:29:33.729360 kubelet[1747]: E0715 11:29:33.729326 1747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:33.729737 kubelet[1747]: E0715 11:29:33.729721 1747 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:33.730132 env[1313]: time="2025-07-15T11:29:33.729852598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eaf55351d5d8c327c1d57b8973e84263,Namespace:kube-system,Attempt:0,}" Jul 15 11:29:33.730132 env[1313]: time="2025-07-15T11:29:33.729923955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 15 11:29:33.731097 kubelet[1747]: E0715 11:29:33.731079 1747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:33.731283 env[1313]: time="2025-07-15T11:29:33.731260854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 15 11:29:33.902828 kubelet[1747]: I0715 11:29:33.902737 1747 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:29:33.902990 kubelet[1747]: E0715 11:29:33.902961 1747 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Jul 15 11:29:33.950691 kubelet[1747]: W0715 11:29:33.950648 1747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Jul 15 11:29:33.950746 kubelet[1747]: E0715 11:29:33.950698 1747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get \"https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:29:34.110952 kubelet[1747]: E0715 11:29:34.110899 1747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="1.6s" Jul 15 11:29:34.201177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1815338729.mount: Deactivated successfully. Jul 15 11:29:34.204533 env[1313]: time="2025-07-15T11:29:34.204468814Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.207706 env[1313]: time="2025-07-15T11:29:34.207658910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.208705 env[1313]: time="2025-07-15T11:29:34.208681861Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.209298 env[1313]: time="2025-07-15T11:29:34.209271076Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.210183 env[1313]: time="2025-07-15T11:29:34.210146840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.212395 env[1313]: time="2025-07-15T11:29:34.212362197Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.213563 env[1313]: time="2025-07-15T11:29:34.213515046Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.215940 env[1313]: time="2025-07-15T11:29:34.215906423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.218813 env[1313]: time="2025-07-15T11:29:34.218772573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.219942 env[1313]: time="2025-07-15T11:29:34.219917559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.221585 env[1313]: time="2025-07-15T11:29:34.221557066Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.222887 env[1313]: time="2025-07-15T11:29:34.222861443Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:34.231245 env[1313]: time="2025-07-15T11:29:34.231187452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:34.231245 env[1313]: time="2025-07-15T11:29:34.231226150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:34.231245 env[1313]: time="2025-07-15T11:29:34.231236924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:34.231444 env[1313]: time="2025-07-15T11:29:34.231383791Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea0db35cfac7b359dc7f1015aedd2f5fe1ae0e2158c01b79086fc5e7d09c42ed pid=1790 runtime=io.containerd.runc.v2 Jul 15 11:29:34.252587 env[1313]: time="2025-07-15T11:29:34.252483789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:34.252587 env[1313]: time="2025-07-15T11:29:34.252532020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:34.252587 env[1313]: time="2025-07-15T11:29:34.252545026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:34.252762 env[1313]: time="2025-07-15T11:29:34.252682969Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea65715deea93a66a244c7b0424f6230763ec29c0ba6926aa2eb5a1d08ad0a0a pid=1826 runtime=io.containerd.runc.v2 Jul 15 11:29:34.256346 env[1313]: time="2025-07-15T11:29:34.256288651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:34.256346 env[1313]: time="2025-07-15T11:29:34.256321736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:34.256346 env[1313]: time="2025-07-15T11:29:34.256333322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:34.256562 env[1313]: time="2025-07-15T11:29:34.256489701Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/26471bf12ba9b7c04fbc9c575d6cea5360f7e7540215cbae5b455241e6e89220 pid=1840 runtime=io.containerd.runc.v2 Jul 15 11:29:34.273790 env[1313]: time="2025-07-15T11:29:34.273742888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea0db35cfac7b359dc7f1015aedd2f5fe1ae0e2158c01b79086fc5e7d09c42ed\"" Jul 15 11:29:34.274555 kubelet[1747]: E0715 11:29:34.274525 1747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:34.276138 env[1313]: time="2025-07-15T11:29:34.276110845Z" level=info msg="CreateContainer within sandbox \"ea0db35cfac7b359dc7f1015aedd2f5fe1ae0e2158c01b79086fc5e7d09c42ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 11:29:34.293945 env[1313]: time="2025-07-15T11:29:34.293696901Z" level=info msg="CreateContainer within sandbox \"ea0db35cfac7b359dc7f1015aedd2f5fe1ae0e2158c01b79086fc5e7d09c42ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2486af93d06f6b1ee9decf1eec54f16675799f31a2691823bc6f8b24a3db095e\"" Jul 15 11:29:34.293945 env[1313]: time="2025-07-15T11:29:34.293808822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"ea65715deea93a66a244c7b0424f6230763ec29c0ba6926aa2eb5a1d08ad0a0a\"" Jul 15 11:29:34.294268 env[1313]: time="2025-07-15T11:29:34.294235604Z" level=info msg="StartContainer for \"2486af93d06f6b1ee9decf1eec54f16675799f31a2691823bc6f8b24a3db095e\"" Jul 15 11:29:34.295308 kubelet[1747]: E0715 11:29:34.295281 1747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:34.296322 env[1313]: time="2025-07-15T11:29:34.296298792Z" level=info msg="CreateContainer within sandbox \"ea65715deea93a66a244c7b0424f6230763ec29c0ba6926aa2eb5a1d08ad0a0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 11:29:34.316697 env[1313]: time="2025-07-15T11:29:34.312809785Z" level=info msg="CreateContainer within sandbox \"ea65715deea93a66a244c7b0424f6230763ec29c0ba6926aa2eb5a1d08ad0a0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"93e81581cb368efbfd769c89f74e578b377ea3f6595486c8fe0af632c706436b\"" Jul 15 11:29:34.316697 env[1313]: time="2025-07-15T11:29:34.313113361Z" level=info msg="StartContainer for \"93e81581cb368efbfd769c89f74e578b377ea3f6595486c8fe0af632c706436b\"" Jul 15 11:29:34.316697 env[1313]: time="2025-07-15T11:29:34.316546447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eaf55351d5d8c327c1d57b8973e84263,Namespace:kube-system,Attempt:0,} returns sandbox id \"26471bf12ba9b7c04fbc9c575d6cea5360f7e7540215cbae5b455241e6e89220\"" Jul 15 11:29:34.317156 kubelet[1747]: E0715 11:29:34.317130 1747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:34.318724 env[1313]: time="2025-07-15T11:29:34.318693043Z" level=info msg="CreateContainer within sandbox \"26471bf12ba9b7c04fbc9c575d6cea5360f7e7540215cbae5b455241e6e89220\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 11:29:34.330267 env[1313]: time="2025-07-15T11:29:34.330199093Z" level=info msg="CreateContainer within sandbox \"26471bf12ba9b7c04fbc9c575d6cea5360f7e7540215cbae5b455241e6e89220\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1905008e81e424fe529637fca4f926843f12a35186e611922f12c5cae7441315\"" Jul 15 11:29:34.330762 env[1313]: time="2025-07-15T11:29:34.330737976Z" level=info msg="StartContainer for \"1905008e81e424fe529637fca4f926843f12a35186e611922f12c5cae7441315\"" Jul 15 11:29:34.348922 env[1313]: time="2025-07-15T11:29:34.348876861Z" level=info msg="StartContainer for \"2486af93d06f6b1ee9decf1eec54f16675799f31a2691823bc6f8b24a3db095e\" returns successfully" Jul 15 11:29:34.367727 env[1313]: time="2025-07-15T11:29:34.365738031Z" level=info msg="StartContainer for \"93e81581cb368efbfd769c89f74e578b377ea3f6595486c8fe0af632c706436b\" returns successfully" Jul 15 11:29:34.392675 env[1313]: time="2025-07-15T11:29:34.390811869Z" level=info msg="StartContainer for \"1905008e81e424fe529637fca4f926843f12a35186e611922f12c5cae7441315\" returns successfully" Jul 15 11:29:34.703918 kubelet[1747]: I0715 11:29:34.703878 1747 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:29:34.730286 kubelet[1747]: E0715 11:29:34.730260 1747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:34.739652 kubelet[1747]: E0715 11:29:34.737021 1747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:34.739652 kubelet[1747]: E0715 11:29:34.738327 1747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jul 15 11:29:35.523521 kubelet[1747]: I0715 11:29:35.523354 1747 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 11:29:35.523521 kubelet[1747]: E0715 11:29:35.523388 1747 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 11:29:35.533773 kubelet[1747]: E0715 11:29:35.533710 1747 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:35.634082 kubelet[1747]: E0715 11:29:35.634032 1747 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:35.734838 kubelet[1747]: E0715 11:29:35.734784 1747 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:35.740024 kubelet[1747]: E0715 11:29:35.739987 1747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:35.901290 kubelet[1747]: E0715 11:29:35.901160 1747 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 15 11:29:35.901433 kubelet[1747]: E0715 11:29:35.901369 1747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:36.689633 kubelet[1747]: I0715 11:29:36.689596 1747 apiserver.go:52] "Watching apiserver" Jul 15 11:29:36.709500 kubelet[1747]: I0715 11:29:36.709471 1747 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 11:29:37.324958 kubelet[1747]: E0715 11:29:37.324924 1747 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:37.546086 systemd[1]: Reloading. Jul 15 11:29:37.601419 /usr/lib/systemd/system-generators/torcx-generator[2036]: time="2025-07-15T11:29:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:29:37.601446 /usr/lib/systemd/system-generators/torcx-generator[2036]: time="2025-07-15T11:29:37Z" level=info msg="torcx already run" Jul 15 11:29:37.742432 kubelet[1747]: E0715 11:29:37.742403 1747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:37.762451 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:29:37.762466 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:29:37.781020 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:29:37.852003 systemd[1]: Stopping kubelet.service... Jul 15 11:29:37.878906 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:29:37.879148 systemd[1]: Stopped kubelet.service. Jul 15 11:29:37.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:37.880132 kernel: kauditd_printk_skb: 43 callbacks suppressed Jul 15 11:29:37.880195 kernel: audit: type=1131 audit(1752578977.878:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:37.880745 systemd[1]: Starting kubelet.service... Jul 15 11:29:37.968753 systemd[1]: Started kubelet.service. Jul 15 11:29:37.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:37.973661 kernel: audit: type=1130 audit(1752578977.968:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:38.023503 kubelet[2092]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:29:38.023503 kubelet[2092]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 11:29:38.023503 kubelet[2092]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 11:29:38.024346 kubelet[2092]: I0715 11:29:38.023588 2092 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:29:38.029322 kubelet[2092]: I0715 11:29:38.029299 2092 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 11:29:38.029322 kubelet[2092]: I0715 11:29:38.029316 2092 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:29:38.029534 kubelet[2092]: I0715 11:29:38.029517 2092 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 11:29:38.030651 kubelet[2092]: I0715 11:29:38.030629 2092 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 11:29:38.032169 kubelet[2092]: I0715 11:29:38.032155 2092 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:29:38.036255 kubelet[2092]: E0715 11:29:38.036211 2092 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:29:38.036255 kubelet[2092]: I0715 11:29:38.036234 2092 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 15 11:29:38.039417 kubelet[2092]: I0715 11:29:38.039396 2092 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 11:29:38.039696 kubelet[2092]: I0715 11:29:38.039682 2092 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 11:29:38.039779 kubelet[2092]: I0715 11:29:38.039759 2092 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:29:38.039929 kubelet[2092]: I0715 11:29:38.039778 2092 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 15 11:29:38.040028 kubelet[2092]: I0715 11:29:38.039935 2092 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:29:38.040028 kubelet[2092]: I0715 11:29:38.039944 2092 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 11:29:38.040028 kubelet[2092]: I0715 11:29:38.039964 2092 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:29:38.040095 kubelet[2092]: I0715 11:29:38.040042 2092 kubelet.go:408] "Attempting to sync node with API server" Jul 15 11:29:38.040095 kubelet[2092]: I0715 11:29:38.040052 2092 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:29:38.040095 kubelet[2092]: I0715 11:29:38.040075 2092 kubelet.go:314] "Adding apiserver pod source" Jul 15 11:29:38.040095 kubelet[2092]: I0715 11:29:38.040084 2092 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:29:38.040801 kubelet[2092]: I0715 11:29:38.040728 2092 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:29:38.041399 kubelet[2092]: I0715 11:29:38.041373 2092 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 11:29:38.041780 kubelet[2092]: I0715 11:29:38.041766 2092 server.go:1274] "Started kubelet" Jul 15 11:29:38.048833 kubelet[2092]: I0715 11:29:38.045774 2092 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:29:38.048833 kubelet[2092]: I0715 11:29:38.045972 2092 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:29:38.048833 kubelet[2092]: I0715 11:29:38.047099 2092 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 11:29:38.048833 kubelet[2092]: E0715 11:29:38.047687 2092 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:29:38.048000 audit[2092]: AVC avc: denied { mac_admin } for pid=2092 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:29:38.049229 kubelet[2092]: I0715 11:29:38.049167 2092 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 15 11:29:38.049229 kubelet[2092]: I0715 11:29:38.049196 2092 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 15 11:29:38.049229 kubelet[2092]: I0715 11:29:38.049217 2092 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:29:38.050497 kubelet[2092]: I0715 11:29:38.050467 2092 server.go:449] "Adding debug handlers to kubelet server" Jul 15 11:29:38.048000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:29:38.053730 kernel: audit: type=1400 audit(1752578978.048:209): avc: denied { mac_admin } for pid=2092 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:29:38.053778 kernel: audit: type=1401 audit(1752578978.048:209): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:29:38.053793 kernel: audit: type=1300 audit(1752578978.048:209): arch=c000003e syscall=188 success=no exit=-22 a0=c0009bd740 a1=c0009085e8 a2=c0009bd710 a3=25 items=0 ppid=1 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 15 11:29:38.048000 audit[2092]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009bd740 a1=c0009085e8 a2=c0009bd710 a3=25 items=0 ppid=1 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:38.054762 kubelet[2092]: I0715 11:29:38.054461 2092 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 11:29:38.055202 kubelet[2092]: I0715 11:29:38.055185 2092 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 11:29:38.055451 kubelet[2092]: I0715 11:29:38.055436 2092 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:29:38.056283 kubelet[2092]: I0715 11:29:38.055926 2092 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:29:38.058327 kernel: audit: type=1327 audit(1752578978.048:209): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:29:38.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:29:38.063115 kubelet[2092]: I0715 11:29:38.063077 2092 factory.go:221] Registration of the systemd container factory successfully Jul 15 11:29:38.063199 kubelet[2092]: I0715 11:29:38.063176 2092 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or 
directory Jul 15 11:29:38.048000 audit[2092]: AVC avc: denied { mac_admin } for pid=2092 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:29:38.065015 kubelet[2092]: I0715 11:29:38.064988 2092 factory.go:221] Registration of the containerd container factory successfully Jul 15 11:29:38.048000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:29:38.068074 kernel: audit: type=1400 audit(1752578978.048:210): avc: denied { mac_admin } for pid=2092 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:29:38.068121 kernel: audit: type=1401 audit(1752578978.048:210): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:29:38.068136 kernel: audit: type=1300 audit(1752578978.048:210): arch=c000003e syscall=188 success=no exit=-22 a0=c000aa2d60 a1=c000908600 a2=c0009bd7d0 a3=25 items=0 ppid=1 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:38.048000 audit[2092]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000aa2d60 a1=c000908600 a2=c0009bd7d0 a3=25 items=0 ppid=1 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:38.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:29:38.074891 kubelet[2092]: I0715 11:29:38.074855 2092 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Jul 15 11:29:38.076659 kernel: audit: type=1327 audit(1752578978.048:210): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:29:38.078668 kubelet[2092]: I0715 11:29:38.077102 2092 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 11:29:38.078668 kubelet[2092]: I0715 11:29:38.077120 2092 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 11:29:38.078668 kubelet[2092]: I0715 11:29:38.077236 2092 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 11:29:38.078668 kubelet[2092]: E0715 11:29:38.077272 2092 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:29:38.101900 kubelet[2092]: I0715 11:29:38.101877 2092 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 11:29:38.101900 kubelet[2092]: I0715 11:29:38.101892 2092 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 11:29:38.102053 kubelet[2092]: I0715 11:29:38.101909 2092 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:29:38.102053 kubelet[2092]: I0715 11:29:38.102035 2092 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 11:29:38.102704 kubelet[2092]: I0715 11:29:38.102045 2092 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 11:29:38.102704 kubelet[2092]: I0715 11:29:38.102063 2092 policy_none.go:49] "None policy: Start" Jul 15 11:29:38.102704 kubelet[2092]: I0715 11:29:38.102537 2092 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 11:29:38.102704 kubelet[2092]: I0715 11:29:38.102550 2092 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:29:38.102704 
kubelet[2092]: I0715 11:29:38.102685 2092 state_mem.go:75] "Updated machine memory state" Jul 15 11:29:38.103713 kubelet[2092]: I0715 11:29:38.103692 2092 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:29:38.103000 audit[2092]: AVC avc: denied { mac_admin } for pid=2092 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:29:38.103000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:29:38.103000 audit[2092]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001152720 a1=c000d85650 a2=c0011526f0 a3=25 items=0 ppid=1 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:38.103000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:29:38.103964 kubelet[2092]: I0715 11:29:38.103745 2092 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 15 11:29:38.103964 kubelet[2092]: I0715 11:29:38.103855 2092 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:29:38.103964 kubelet[2092]: I0715 11:29:38.103864 2092 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:29:38.104530 kubelet[2092]: I0715 11:29:38.104502 2092 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:29:38.184467 kubelet[2092]: E0715 11:29:38.184427 2092 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:38.211508 kubelet[2092]: I0715 11:29:38.211480 2092 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:29:38.216781 kubelet[2092]: I0715 11:29:38.216711 2092 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 15 11:29:38.216864 kubelet[2092]: I0715 11:29:38.216788 2092 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 11:29:38.256713 kubelet[2092]: I0715 11:29:38.256678 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eaf55351d5d8c327c1d57b8973e84263-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eaf55351d5d8c327c1d57b8973e84263\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:38.256713 kubelet[2092]: I0715 11:29:38.256712 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:38.256713 kubelet[2092]: I0715 11:29:38.256730 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:38.256903 kubelet[2092]: I0715 11:29:38.256744 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:38.256903 kubelet[2092]: I0715 11:29:38.256763 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:38.256903 kubelet[2092]: I0715 11:29:38.256782 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:29:38.256903 kubelet[2092]: I0715 11:29:38.256804 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eaf55351d5d8c327c1d57b8973e84263-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"eaf55351d5d8c327c1d57b8973e84263\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:38.256903 kubelet[2092]: I0715 11:29:38.256818 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eaf55351d5d8c327c1d57b8973e84263-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eaf55351d5d8c327c1d57b8973e84263\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:38.257008 kubelet[2092]: I0715 11:29:38.256832 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:38.483884 kubelet[2092]: E0715 11:29:38.483850 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:38.484856 kubelet[2092]: E0715 11:29:38.484831 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:38.485009 kubelet[2092]: E0715 11:29:38.484978 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:39.040629 kubelet[2092]: I0715 11:29:39.040599 2092 apiserver.go:52] "Watching apiserver" Jul 15 11:29:39.055589 kubelet[2092]: I0715 11:29:39.055539 2092 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 11:29:39.089361 kubelet[2092]: E0715 11:29:39.089310 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:39.091059 kubelet[2092]: E0715 11:29:39.091017 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:39.101124 kubelet[2092]: E0715 11:29:39.101093 2092 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:39.101230 kubelet[2092]: E0715 11:29:39.101202 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:39.108928 kubelet[2092]: I0715 11:29:39.108490 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.108471875 podStartE2EDuration="1.108471875s" podCreationTimestamp="2025-07-15 11:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:29:39.108294561 +0000 UTC m=+1.136293795" watchObservedRunningTime="2025-07-15 11:29:39.108471875 +0000 UTC m=+1.136471109" Jul 15 11:29:39.122137 kubelet[2092]: I0715 11:29:39.121986 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.1219697260000001 podStartE2EDuration="1.121969726s" podCreationTimestamp="2025-07-15 11:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:29:39.116023129 +0000 UTC m=+1.144022353" watchObservedRunningTime="2025-07-15 11:29:39.121969726 +0000 UTC m=+1.149968960" Jul 15 11:29:39.130575 kubelet[2092]: I0715 11:29:39.130523 2092 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.130508139 podStartE2EDuration="2.130508139s" podCreationTimestamp="2025-07-15 11:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:29:39.122168443 +0000 UTC m=+1.150167677" watchObservedRunningTime="2025-07-15 11:29:39.130508139 +0000 UTC m=+1.158507373" Jul 15 11:29:40.090185 kubelet[2092]: E0715 11:29:40.090145 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:40.925060 kubelet[2092]: E0715 11:29:40.925018 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:41.091244 kubelet[2092]: E0715 11:29:41.091207 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:42.600819 kubelet[2092]: I0715 11:29:42.600777 2092 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 11:29:42.601176 env[1313]: time="2025-07-15T11:29:42.601036741Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 15 11:29:42.601341 kubelet[2092]: I0715 11:29:42.601177 2092 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 11:29:43.688199 kubelet[2092]: I0715 11:29:43.688163 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf19ce6a-2fba-4532-b351-04148472673b-xtables-lock\") pod \"kube-proxy-vhgk8\" (UID: \"cf19ce6a-2fba-4532-b351-04148472673b\") " pod="kube-system/kube-proxy-vhgk8" Jul 15 11:29:43.688630 kubelet[2092]: I0715 11:29:43.688610 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf19ce6a-2fba-4532-b351-04148472673b-kube-proxy\") pod \"kube-proxy-vhgk8\" (UID: \"cf19ce6a-2fba-4532-b351-04148472673b\") " pod="kube-system/kube-proxy-vhgk8" Jul 15 11:29:43.688762 kubelet[2092]: I0715 11:29:43.688744 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf19ce6a-2fba-4532-b351-04148472673b-lib-modules\") pod \"kube-proxy-vhgk8\" (UID: \"cf19ce6a-2fba-4532-b351-04148472673b\") " pod="kube-system/kube-proxy-vhgk8" Jul 15 11:29:43.688850 kubelet[2092]: I0715 11:29:43.688832 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kppf2\" (UniqueName: \"kubernetes.io/projected/cf19ce6a-2fba-4532-b351-04148472673b-kube-api-access-kppf2\") pod \"kube-proxy-vhgk8\" (UID: \"cf19ce6a-2fba-4532-b351-04148472673b\") " pod="kube-system/kube-proxy-vhgk8" Jul 15 11:29:43.789059 kubelet[2092]: I0715 11:29:43.789024 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77b4cf0f-ed39-48d2-8da9-2974d292651a-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-dznlk\" 
(UID: \"77b4cf0f-ed39-48d2-8da9-2974d292651a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-dznlk" Jul 15 11:29:43.789059 kubelet[2092]: I0715 11:29:43.789072 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slmdm\" (UniqueName: \"kubernetes.io/projected/77b4cf0f-ed39-48d2-8da9-2974d292651a-kube-api-access-slmdm\") pod \"tigera-operator-5bf8dfcb4-dznlk\" (UID: \"77b4cf0f-ed39-48d2-8da9-2974d292651a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-dznlk" Jul 15 11:29:43.793942 kubelet[2092]: I0715 11:29:43.793903 2092 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 15 11:29:43.929709 kubelet[2092]: E0715 11:29:43.929675 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.930218 env[1313]: time="2025-07-15T11:29:43.930179537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhgk8,Uid:cf19ce6a-2fba-4532-b351-04148472673b,Namespace:kube-system,Attempt:0,}" Jul 15 11:29:43.949086 env[1313]: time="2025-07-15T11:29:43.948966255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:43.949086 env[1313]: time="2025-07-15T11:29:43.949033509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:43.949086 env[1313]: time="2025-07-15T11:29:43.949061995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:43.949395 env[1313]: time="2025-07-15T11:29:43.949324557Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8f2babc5825923047d9e5838927bd1863ff28b73e61d4b33ceb3379a60f34dd pid=2149 runtime=io.containerd.runc.v2 Jul 15 11:29:43.979555 env[1313]: time="2025-07-15T11:29:43.979500197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhgk8,Uid:cf19ce6a-2fba-4532-b351-04148472673b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8f2babc5825923047d9e5838927bd1863ff28b73e61d4b33ceb3379a60f34dd\"" Jul 15 11:29:43.980246 kubelet[2092]: E0715 11:29:43.980217 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.982735 env[1313]: time="2025-07-15T11:29:43.982673978Z" level=info msg="CreateContainer within sandbox \"c8f2babc5825923047d9e5838927bd1863ff28b73e61d4b33ceb3379a60f34dd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 11:29:43.992827 env[1313]: time="2025-07-15T11:29:43.992785253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-dznlk,Uid:77b4cf0f-ed39-48d2-8da9-2974d292651a,Namespace:tigera-operator,Attempt:0,}" Jul 15 11:29:43.997448 env[1313]: time="2025-07-15T11:29:43.997401398Z" level=info msg="CreateContainer within sandbox \"c8f2babc5825923047d9e5838927bd1863ff28b73e61d4b33ceb3379a60f34dd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7bcd606e8e80c24b36de98b10b1e7877f9f3b38eb512c60dffb15d3239f5ecd9\"" Jul 15 11:29:43.997900 env[1313]: time="2025-07-15T11:29:43.997876602Z" level=info msg="StartContainer for \"7bcd606e8e80c24b36de98b10b1e7877f9f3b38eb512c60dffb15d3239f5ecd9\"" Jul 15 11:29:44.010370 env[1313]: time="2025-07-15T11:29:44.010295318Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:44.010563 env[1313]: time="2025-07-15T11:29:44.010526045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:44.010762 env[1313]: time="2025-07-15T11:29:44.010550703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:44.011155 env[1313]: time="2025-07-15T11:29:44.010976376Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77c2c3efef70e7cb5d650efaad185dd3e24dbb2ea07feb0afcd091ace9a50811 pid=2204 runtime=io.containerd.runc.v2 Jul 15 11:29:44.048446 env[1313]: time="2025-07-15T11:29:44.048401260Z" level=info msg="StartContainer for \"7bcd606e8e80c24b36de98b10b1e7877f9f3b38eb512c60dffb15d3239f5ecd9\" returns successfully" Jul 15 11:29:44.061175 env[1313]: time="2025-07-15T11:29:44.059985007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-dznlk,Uid:77b4cf0f-ed39-48d2-8da9-2974d292651a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"77c2c3efef70e7cb5d650efaad185dd3e24dbb2ea07feb0afcd091ace9a50811\"" Jul 15 11:29:44.062722 env[1313]: time="2025-07-15T11:29:44.062579572Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 11:29:44.096855 kubelet[2092]: E0715 11:29:44.096816 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:44.106815 kubelet[2092]: I0715 11:29:44.105762 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vhgk8" podStartSLOduration=1.105747262 podStartE2EDuration="1.105747262s" podCreationTimestamp="2025-07-15 11:29:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:29:44.105509211 +0000 UTC m=+6.133508445" watchObservedRunningTime="2025-07-15 11:29:44.105747262 +0000 UTC m=+6.133746496" Jul 15 11:29:44.150000 audit[2294]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2294 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.152340 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 15 11:29:44.152396 kernel: audit: type=1325 audit(1752578984.150:212): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2294 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.150000 audit[2295]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2295 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.157506 kernel: audit: type=1325 audit(1752578984.150:213): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2295 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.157555 kernel: audit: type=1300 audit(1752578984.150:213): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb3cb91b0 a2=0 a3=25522c25bba536ee items=0 ppid=2223 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.150000 audit[2295]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb3cb91b0 a2=0 a3=25522c25bba536ee items=0 ppid=2223 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 15 11:29:44.163970 kernel: 
audit: type=1327 audit(1752578984.150:213): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 15 11:29:44.164008 kernel: audit: type=1325 audit(1752578984.152:214): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2296 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.152000 audit[2296]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2296 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.152000 audit[2296]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef91fbf40 a2=0 a3=7ffef91fbf2c items=0 ppid=2223 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.170268 kernel: audit: type=1300 audit(1752578984.152:214): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef91fbf40 a2=0 a3=7ffef91fbf2c items=0 ppid=2223 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.170299 kernel: audit: type=1327 audit(1752578984.152:214): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 15 11:29:44.152000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 15 11:29:44.154000 audit[2297]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.174371 kernel: audit: type=1325 audit(1752578984.154:215): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.174392 kernel: audit: 
type=1300 audit(1752578984.154:215): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe97a34820 a2=0 a3=7ffe97a3480c items=0 ppid=2223 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.154000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe97a34820 a2=0 a3=7ffe97a3480c items=0 ppid=2223 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 15 11:29:44.180796 kernel: audit: type=1327 audit(1752578984.154:215): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 15 11:29:44.150000 audit[2294]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0a2d1140 a2=0 a3=7ffe0a2d112c items=0 ppid=2223 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.150000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 15 11:29:44.157000 audit[2298]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.157000 audit[2298]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb1685b80 a2=0 a3=7ffdb1685b6c items=0 ppid=2223 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 15 11:29:44.158000 audit[2299]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2299 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.158000 audit[2299]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd00421300 a2=0 a3=7ffd004212ec items=0 ppid=2223 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 15 11:29:44.252000 audit[2300]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2300 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.252000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd87b8d920 a2=0 a3=7ffd87b8d90c items=0 ppid=2223 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.252000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 15 11:29:44.255000 audit[2302]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2302 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.255000 audit[2302]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff63498b60 a2=0 a3=7fff63498b4c items=0 ppid=2223 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.255000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 15 11:29:44.258000 audit[2305]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.258000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc30382000 a2=0 a3=7ffc30381fec items=0 ppid=2223 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.258000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 15 11:29:44.259000 audit[2306]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.259000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe906ad960 a2=0 a3=7ffe906ad94c items=0 ppid=2223 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 15 11:29:44.261000 audit[2308]: NETFILTER_CFG 
table=filter:48 family=2 entries=1 op=nft_register_rule pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.261000 audit[2308]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff41d111d0 a2=0 a3=7fff41d111bc items=0 ppid=2223 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 15 11:29:44.261000 audit[2309]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.261000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff86c9b6f0 a2=0 a3=7fff86c9b6dc items=0 ppid=2223 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 15 11:29:44.264000 audit[2311]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2311 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.264000 audit[2311]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd060797f0 a2=0 a3=7ffd060797dc items=0 ppid=2223 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.264000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 15 11:29:44.266000 audit[2314]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.266000 audit[2314]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff79416de0 a2=0 a3=7fff79416dcc items=0 ppid=2223 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 15 11:29:44.267000 audit[2315]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.267000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf82c7120 a2=0 a3=7ffdf82c710c items=0 ppid=2223 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 15 11:29:44.269000 audit[2317]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.269000 audit[2317]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffd9d243810 a2=0 a3=7ffd9d2437fc items=0 ppid=2223 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.269000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 15 11:29:44.270000 audit[2318]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.270000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe7afc080 a2=0 a3=7fffe7afc06c items=0 ppid=2223 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.270000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 15 11:29:44.272000 audit[2320]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.272000 audit[2320]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc40302380 a2=0 a3=7ffc4030236c items=0 ppid=2223 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.272000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 15 11:29:44.275000 audit[2323]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.275000 audit[2323]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb5dc1f60 a2=0 a3=7ffcb5dc1f4c items=0 ppid=2223 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 15 11:29:44.277000 audit[2326]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.277000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffdc4c7ee0 a2=0 a3=7fffdc4c7ecc items=0 ppid=2223 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 15 11:29:44.278000 audit[2327]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.278000 audit[2327]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc6909e20 a2=0 a3=7ffdc6909e0c items=0 ppid=2223 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 15 11:29:44.280000 audit[2329]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.280000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd4a415bb0 a2=0 a3=7ffd4a415b9c items=0 ppid=2223 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:29:44.283000 audit[2332]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.283000 audit[2332]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc674a370 a2=0 a3=7fffc674a35c items=0 ppid=2223 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.283000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:29:44.284000 audit[2333]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.284000 audit[2333]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff5effd90 a2=0 a3=7ffff5effd7c items=0 ppid=2223 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 15 11:29:44.286000 audit[2335]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:29:44.286000 audit[2335]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc2edece80 a2=0 a3=7ffc2edece6c items=0 ppid=2223 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.286000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 15 11:29:44.306000 audit[2341]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:44.306000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff16f96d50 a2=0 a3=7fff16f96d3c 
items=0 ppid=2223 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.306000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:44.314000 audit[2341]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:44.314000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff16f96d50 a2=0 a3=7fff16f96d3c items=0 ppid=2223 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:44.315000 audit[2346]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.315000 audit[2346]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd87dafbc0 a2=0 a3=7ffd87dafbac items=0 ppid=2223 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.315000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 15 11:29:44.318000 audit[2348]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.318000 audit[2348]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffd5b73c030 a2=0 a3=7ffd5b73c01c items=0 ppid=2223 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 15 11:29:44.321000 audit[2351]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.321000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff740826d0 a2=0 a3=7fff740826bc items=0 ppid=2223 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.321000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 15 11:29:44.322000 audit[2352]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.322000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3bfb5c20 a2=0 a3=7fff3bfb5c0c items=0 ppid=2223 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.322000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 15 11:29:44.324000 audit[2354]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2354 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.324000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffecb1f7d10 a2=0 a3=7ffecb1f7cfc items=0 ppid=2223 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.324000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 15 11:29:44.325000 audit[2355]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2355 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.325000 audit[2355]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed2cb6e20 a2=0 a3=7ffed2cb6e0c items=0 ppid=2223 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.325000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 15 11:29:44.327000 audit[2357]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2357 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.327000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffda2e9dcd0 a2=0 a3=7ffda2e9dcbc items=0 ppid=2223 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.327000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 15 11:29:44.330000 audit[2360]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.330000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd45c74fe0 a2=0 a3=7ffd45c74fcc items=0 ppid=2223 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.330000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 15 11:29:44.331000 audit[2361]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.331000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf205b770 a2=0 a3=7ffcf205b75c items=0 ppid=2223 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.331000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 15 11:29:44.333000 audit[2363]: NETFILTER_CFG 
table=filter:74 family=10 entries=1 op=nft_register_rule pid=2363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.333000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff7de43630 a2=0 a3=7fff7de4361c items=0 ppid=2223 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.333000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 15 11:29:44.333000 audit[2364]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.333000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3ac8cdc0 a2=0 a3=7ffe3ac8cdac items=0 ppid=2223 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.333000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 15 11:29:44.335000 audit[2366]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2366 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.335000 audit[2366]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed60ec420 a2=0 a3=7ffed60ec40c items=0 ppid=2223 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.335000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 15 11:29:44.338000 audit[2369]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.338000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe7c3b1990 a2=0 a3=7ffe7c3b197c items=0 ppid=2223 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.338000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 15 11:29:44.341000 audit[2372]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.341000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe80bfd660 a2=0 a3=7ffe80bfd64c items=0 ppid=2223 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 15 11:29:44.342000 audit[2373]: NETFILTER_CFG table=nat:79 family=10 
entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.342000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff0af07940 a2=0 a3=7fff0af0792c items=0 ppid=2223 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.342000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 15 11:29:44.345000 audit[2375]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.345000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffcb6c78e10 a2=0 a3=7ffcb6c78dfc items=0 ppid=2223 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:29:44.348000 audit[2378]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.348000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd4c078f20 a2=0 a3=7ffd4c078f0c items=0 ppid=2223 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.348000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:29:44.349000 audit[2379]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.349000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc5608f20 a2=0 a3=7ffcc5608f0c items=0 ppid=2223 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 15 11:29:44.350000 audit[2381]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.350000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd7ed43650 a2=0 a3=7ffd7ed4363c items=0 ppid=2223 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.350000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 15 11:29:44.351000 audit[2382]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.351000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2c2d2960 a2=0 
a3=7ffc2c2d294c items=0 ppid=2223 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 15 11:29:44.353000 audit[2384]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.353000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeb947f050 a2=0 a3=7ffeb947f03c items=0 ppid=2223 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.353000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:29:44.355000 audit[2387]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:29:44.355000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff4d71ed20 a2=0 a3=7fff4d71ed0c items=0 ppid=2223 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:29:44.358000 audit[2389]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 15 11:29:44.358000 audit[2389]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe982f6070 a2=0 a3=7ffe982f605c items=0 ppid=2223 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.358000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:44.358000 audit[2389]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 15 11:29:44.358000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe982f6070 a2=0 a3=7ffe982f605c items=0 ppid=2223 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:44.358000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:45.206842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696132826.mount: Deactivated successfully. 
Jul 15 11:29:45.939544 env[1313]: time="2025-07-15T11:29:45.939470862Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:45.941341 env[1313]: time="2025-07-15T11:29:45.941316154Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:45.943086 env[1313]: time="2025-07-15T11:29:45.943042712Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:45.945795 env[1313]: time="2025-07-15T11:29:45.945747660Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:45.946103 env[1313]: time="2025-07-15T11:29:45.946063563Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 15 11:29:45.948246 env[1313]: time="2025-07-15T11:29:45.948191413Z" level=info msg="CreateContainer within sandbox \"77c2c3efef70e7cb5d650efaad185dd3e24dbb2ea07feb0afcd091ace9a50811\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 11:29:45.960053 env[1313]: time="2025-07-15T11:29:45.959982112Z" level=info msg="CreateContainer within sandbox \"77c2c3efef70e7cb5d650efaad185dd3e24dbb2ea07feb0afcd091ace9a50811\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c0c7daa663df304f19b483145dde7021093b8129f720eb6a5dc05c54c6c487d9\"" Jul 15 11:29:45.960615 env[1313]: time="2025-07-15T11:29:45.960587246Z" level=info msg="StartContainer for 
\"c0c7daa663df304f19b483145dde7021093b8129f720eb6a5dc05c54c6c487d9\"" Jul 15 11:29:46.005471 env[1313]: time="2025-07-15T11:29:46.005416271Z" level=info msg="StartContainer for \"c0c7daa663df304f19b483145dde7021093b8129f720eb6a5dc05c54c6c487d9\" returns successfully" Jul 15 11:29:46.109581 kubelet[2092]: I0715 11:29:46.109520 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-dznlk" podStartSLOduration=1.22338659 podStartE2EDuration="3.109499388s" podCreationTimestamp="2025-07-15 11:29:43 +0000 UTC" firstStartedPulling="2025-07-15 11:29:44.061017681 +0000 UTC m=+6.089016905" lastFinishedPulling="2025-07-15 11:29:45.947130469 +0000 UTC m=+7.975129703" observedRunningTime="2025-07-15 11:29:46.10941393 +0000 UTC m=+8.137413164" watchObservedRunningTime="2025-07-15 11:29:46.109499388 +0000 UTC m=+8.137498622" Jul 15 11:29:47.938214 kubelet[2092]: E0715 11:29:47.938179 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:48.105306 kubelet[2092]: E0715 11:29:48.105252 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:50.657450 kubelet[2092]: E0715 11:29:50.657413 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:50.929827 kubelet[2092]: E0715 11:29:50.929784 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:53.464480 sudo[1470]: pam_unix(sudo:session): session closed for user root Jul 15 11:29:53.463000 audit[1470]: USER_END pid=1470 uid=500 auid=500 ses=7 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:29:53.477424 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 15 11:29:53.477481 kernel: audit: type=1106 audit(1752578993.463:263): pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:29:53.478557 sshd[1463]: pam_unix(sshd:session): session closed for user core Jul 15 11:29:53.464000 audit[1470]: CRED_DISP pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:29:53.481140 systemd[1]: sshd@6-10.0.0.41:22-10.0.0.1:48868.service: Deactivated successfully. Jul 15 11:29:53.482294 systemd-logind[1289]: Session 7 logged out. Waiting for processes to exit. Jul 15 11:29:53.482372 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 11:29:53.483375 systemd-logind[1289]: Removed session 7. Jul 15 11:29:53.484308 kernel: audit: type=1104 audit(1752578993.464:264): pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:53.484371 kernel: audit: type=1106 audit(1752578993.478:265): pid=1463 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:53.478000 audit[1463]: USER_END pid=1463 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:53.488473 kernel: audit: type=1104 audit(1752578993.479:266): pid=1463 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:53.479000 audit[1463]: CRED_DISP pid=1463 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:29:53.491729 kernel: audit: type=1131 audit(1752578993.480:267): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.41:22-10.0.0.1:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:29:53.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.41:22-10.0.0.1:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:29:54.183000 audit[2482]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:54.183000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd479fc810 a2=0 a3=7ffd479fc7fc items=0 ppid=2223 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:54.192023 kernel: audit: type=1325 audit(1752578994.183:268): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:54.192104 kernel: audit: type=1300 audit(1752578994.183:268): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd479fc810 a2=0 a3=7ffd479fc7fc items=0 ppid=2223 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:54.192124 kernel: audit: type=1327 audit(1752578994.183:268): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:54.183000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:54.195000 audit[2482]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:54.195000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd479fc810 a2=0 a3=0 items=0 ppid=2223 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 15 11:29:54.205414 kernel: audit: type=1325 audit(1752578994.195:269): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:54.205488 kernel: audit: type=1300 audit(1752578994.195:269): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd479fc810 a2=0 a3=0 items=0 ppid=2223 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:54.195000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:54.221000 audit[2484]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:54.221000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff161176d0 a2=0 a3=7fff161176bc items=0 ppid=2223 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:54.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:54.226000 audit[2484]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:54.226000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff161176d0 a2=0 a3=0 items=0 ppid=2223 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:54.226000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:56.105765 update_engine[1293]: I0715 11:29:56.105706 1293 update_attempter.cc:509] Updating boot flags... Jul 15 11:29:56.713000 audit[2501]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:56.713000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffe4bcf6b0 a2=0 a3=7fffe4bcf69c items=0 ppid=2223 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:56.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:56.725000 audit[2501]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:56.725000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe4bcf6b0 a2=0 a3=0 items=0 ppid=2223 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:56.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:56.734000 audit[2503]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:56.734000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff2dba8aa0 a2=0 a3=7fff2dba8a8c items=0 ppid=2223 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:56.734000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:56.739000 audit[2503]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:56.739000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2dba8aa0 a2=0 a3=0 items=0 ppid=2223 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:56.739000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:56.975032 kubelet[2092]: I0715 11:29:56.974922 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfzqp\" (UniqueName: \"kubernetes.io/projected/32fb159c-8f70-42a2-84ae-b9d31e2455b4-kube-api-access-tfzqp\") pod \"calico-typha-5c9c5f76b9-8qlbm\" (UID: \"32fb159c-8f70-42a2-84ae-b9d31e2455b4\") " pod="calico-system/calico-typha-5c9c5f76b9-8qlbm" Jul 15 11:29:56.975032 kubelet[2092]: I0715 11:29:56.974959 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32fb159c-8f70-42a2-84ae-b9d31e2455b4-tigera-ca-bundle\") pod \"calico-typha-5c9c5f76b9-8qlbm\" (UID: \"32fb159c-8f70-42a2-84ae-b9d31e2455b4\") " pod="calico-system/calico-typha-5c9c5f76b9-8qlbm" Jul 15 11:29:56.975032 kubelet[2092]: I0715 11:29:56.974974 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/32fb159c-8f70-42a2-84ae-b9d31e2455b4-typha-certs\") pod \"calico-typha-5c9c5f76b9-8qlbm\" (UID: \"32fb159c-8f70-42a2-84ae-b9d31e2455b4\") " pod="calico-system/calico-typha-5c9c5f76b9-8qlbm" Jul 15 11:29:57.086038 kubelet[2092]: E0715 11:29:57.085553 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:57.086171 env[1313]: time="2025-07-15T11:29:57.085968706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c9c5f76b9-8qlbm,Uid:32fb159c-8f70-42a2-84ae-b9d31e2455b4,Namespace:calico-system,Attempt:0,}" Jul 15 11:29:57.101404 env[1313]: time="2025-07-15T11:29:57.101321733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:57.101480 env[1313]: time="2025-07-15T11:29:57.101405053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:57.101480 env[1313]: time="2025-07-15T11:29:57.101427626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:57.101691 env[1313]: time="2025-07-15T11:29:57.101628424Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/95ea617d45cd7e3b3d3f6332772f0463ddd2cda2bb6f3916e4f91268f3bf0f32 pid=2514 runtime=io.containerd.runc.v2 Jul 15 11:29:57.150782 env[1313]: time="2025-07-15T11:29:57.150730795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c9c5f76b9-8qlbm,Uid:32fb159c-8f70-42a2-84ae-b9d31e2455b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"95ea617d45cd7e3b3d3f6332772f0463ddd2cda2bb6f3916e4f91268f3bf0f32\"" Jul 15 11:29:57.151440 kubelet[2092]: E0715 11:29:57.151410 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:57.152284 env[1313]: time="2025-07-15T11:29:57.152249682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 11:29:57.176035 kubelet[2092]: I0715 11:29:57.176003 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3dcf0e6d-bb25-496e-897a-67edfd8043a1-xtables-lock\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.176035 kubelet[2092]: I0715 11:29:57.176038 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3dcf0e6d-bb25-496e-897a-67edfd8043a1-policysync\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.176222 kubelet[2092]: I0715 11:29:57.176054 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/3dcf0e6d-bb25-496e-897a-67edfd8043a1-flexvol-driver-host\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.176222 kubelet[2092]: I0715 11:29:57.176069 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3dcf0e6d-bb25-496e-897a-67edfd8043a1-cni-log-dir\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.176222 kubelet[2092]: I0715 11:29:57.176082 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dcf0e6d-bb25-496e-897a-67edfd8043a1-tigera-ca-bundle\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.176222 kubelet[2092]: I0715 11:29:57.176094 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3dcf0e6d-bb25-496e-897a-67edfd8043a1-cni-net-dir\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.176222 kubelet[2092]: I0715 11:29:57.176107 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3dcf0e6d-bb25-496e-897a-67edfd8043a1-node-certs\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.176378 kubelet[2092]: I0715 11:29:57.176121 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcf0e6d-bb25-496e-897a-67edfd8043a1-cni-bin-dir\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.176378 kubelet[2092]: I0715 11:29:57.176136 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2s77\" (UniqueName: \"kubernetes.io/projected/3dcf0e6d-bb25-496e-897a-67edfd8043a1-kube-api-access-r2s77\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.176378 kubelet[2092]: I0715 11:29:57.176151 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3dcf0e6d-bb25-496e-897a-67edfd8043a1-lib-modules\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.176378 kubelet[2092]: I0715 11:29:57.176163 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3dcf0e6d-bb25-496e-897a-67edfd8043a1-var-lib-calico\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.176378 kubelet[2092]: I0715 11:29:57.176178 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3dcf0e6d-bb25-496e-897a-67edfd8043a1-var-run-calico\") pod \"calico-node-v8t5b\" (UID: \"3dcf0e6d-bb25-496e-897a-67edfd8043a1\") " pod="calico-system/calico-node-v8t5b" Jul 15 11:29:57.279667 kubelet[2092]: E0715 11:29:57.277260 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.279667 kubelet[2092]: W0715 11:29:57.277281 2092 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.279667 kubelet[2092]: E0715 11:29:57.277304 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.279667 kubelet[2092]: E0715 11:29:57.277544 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.279667 kubelet[2092]: W0715 11:29:57.277562 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.279667 kubelet[2092]: E0715 11:29:57.277587 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.279667 kubelet[2092]: E0715 11:29:57.277786 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.279667 kubelet[2092]: W0715 11:29:57.277795 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.279667 kubelet[2092]: E0715 11:29:57.277811 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.279667 kubelet[2092]: E0715 11:29:57.278020 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.280022 kubelet[2092]: W0715 11:29:57.278028 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.280022 kubelet[2092]: E0715 11:29:57.278043 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.280022 kubelet[2092]: E0715 11:29:57.278393 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.280022 kubelet[2092]: W0715 11:29:57.278408 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.280022 kubelet[2092]: E0715 11:29:57.278426 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.280022 kubelet[2092]: E0715 11:29:57.278675 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.280022 kubelet[2092]: W0715 11:29:57.278697 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.280022 kubelet[2092]: E0715 11:29:57.278726 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.280022 kubelet[2092]: E0715 11:29:57.278899 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.280022 kubelet[2092]: W0715 11:29:57.278907 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.280240 kubelet[2092]: E0715 11:29:57.278915 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.280240 kubelet[2092]: E0715 11:29:57.279388 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.280240 kubelet[2092]: W0715 11:29:57.279398 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.280240 kubelet[2092]: E0715 11:29:57.279409 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.280240 kubelet[2092]: E0715 11:29:57.279627 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.280240 kubelet[2092]: W0715 11:29:57.279646 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.280240 kubelet[2092]: E0715 11:29:57.279703 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.280240 kubelet[2092]: E0715 11:29:57.279819 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.280240 kubelet[2092]: W0715 11:29:57.279824 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.280240 kubelet[2092]: E0715 11:29:57.279871 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.280478 kubelet[2092]: E0715 11:29:57.279942 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.280478 kubelet[2092]: W0715 11:29:57.279947 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.280478 kubelet[2092]: E0715 11:29:57.279954 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.287139 kubelet[2092]: E0715 11:29:57.287083 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96lqs" podUID="ea4b49fb-f94a-4309-9631-1c291cb3db4b" Jul 15 11:29:57.293259 kubelet[2092]: E0715 11:29:57.293230 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.293259 kubelet[2092]: W0715 11:29:57.293250 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.293259 kubelet[2092]: E0715 11:29:57.293268 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.376982 kubelet[2092]: E0715 11:29:57.376944 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.377164 kubelet[2092]: W0715 11:29:57.377129 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.377164 kubelet[2092]: E0715 11:29:57.377162 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.377476 kubelet[2092]: E0715 11:29:57.377460 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.377476 kubelet[2092]: W0715 11:29:57.377476 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.377570 kubelet[2092]: E0715 11:29:57.377493 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.377686 kubelet[2092]: E0715 11:29:57.377674 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.377686 kubelet[2092]: W0715 11:29:57.377685 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.377777 kubelet[2092]: E0715 11:29:57.377694 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.377910 kubelet[2092]: E0715 11:29:57.377886 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.377910 kubelet[2092]: W0715 11:29:57.377894 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.377979 kubelet[2092]: E0715 11:29:57.377911 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.378186 kubelet[2092]: E0715 11:29:57.378173 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.378186 kubelet[2092]: W0715 11:29:57.378185 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.378248 kubelet[2092]: E0715 11:29:57.378195 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.378409 kubelet[2092]: E0715 11:29:57.378383 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.378409 kubelet[2092]: W0715 11:29:57.378395 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.378409 kubelet[2092]: E0715 11:29:57.378406 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.378617 kubelet[2092]: E0715 11:29:57.378553 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.378617 kubelet[2092]: W0715 11:29:57.378560 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.378617 kubelet[2092]: E0715 11:29:57.378568 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.378734 kubelet[2092]: E0715 11:29:57.378721 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.378734 kubelet[2092]: W0715 11:29:57.378731 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.378792 kubelet[2092]: E0715 11:29:57.378738 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.378909 kubelet[2092]: E0715 11:29:57.378893 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.378909 kubelet[2092]: W0715 11:29:57.378904 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.378995 kubelet[2092]: E0715 11:29:57.378914 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.379053 kubelet[2092]: E0715 11:29:57.379026 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.379053 kubelet[2092]: W0715 11:29:57.379038 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.379053 kubelet[2092]: E0715 11:29:57.379046 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.379175 kubelet[2092]: E0715 11:29:57.379153 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.379175 kubelet[2092]: W0715 11:29:57.379166 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.379175 kubelet[2092]: E0715 11:29:57.379173 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.379342 kubelet[2092]: E0715 11:29:57.379324 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.379425 kubelet[2092]: W0715 11:29:57.379336 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.379425 kubelet[2092]: E0715 11:29:57.379364 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.379550 kubelet[2092]: E0715 11:29:57.379535 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.379550 kubelet[2092]: W0715 11:29:57.379546 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.379619 kubelet[2092]: E0715 11:29:57.379555 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.379746 kubelet[2092]: E0715 11:29:57.379734 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.379746 kubelet[2092]: W0715 11:29:57.379744 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.379804 kubelet[2092]: E0715 11:29:57.379751 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.379927 kubelet[2092]: E0715 11:29:57.379899 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.379927 kubelet[2092]: W0715 11:29:57.379908 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.379927 kubelet[2092]: E0715 11:29:57.379914 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.380087 kubelet[2092]: E0715 11:29:57.380063 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.380087 kubelet[2092]: W0715 11:29:57.380073 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.380087 kubelet[2092]: E0715 11:29:57.380080 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.380237 kubelet[2092]: E0715 11:29:57.380223 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.380237 kubelet[2092]: W0715 11:29:57.380232 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.380237 kubelet[2092]: E0715 11:29:57.380238 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.380385 kubelet[2092]: E0715 11:29:57.380371 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.380385 kubelet[2092]: W0715 11:29:57.380379 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.380385 kubelet[2092]: E0715 11:29:57.380386 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.380536 kubelet[2092]: E0715 11:29:57.380524 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.380536 kubelet[2092]: W0715 11:29:57.380532 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.380585 kubelet[2092]: E0715 11:29:57.380544 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.380716 kubelet[2092]: E0715 11:29:57.380704 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.380716 kubelet[2092]: W0715 11:29:57.380714 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.380764 kubelet[2092]: E0715 11:29:57.380721 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.380913 kubelet[2092]: E0715 11:29:57.380899 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.380913 kubelet[2092]: W0715 11:29:57.380907 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.380913 kubelet[2092]: E0715 11:29:57.380914 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.381006 kubelet[2092]: I0715 11:29:57.380935 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ea4b49fb-f94a-4309-9631-1c291cb3db4b-varrun\") pod \"csi-node-driver-96lqs\" (UID: \"ea4b49fb-f94a-4309-9631-1c291cb3db4b\") " pod="calico-system/csi-node-driver-96lqs" Jul 15 11:29:57.381098 kubelet[2092]: E0715 11:29:57.381084 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.381098 kubelet[2092]: W0715 11:29:57.381094 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.381152 kubelet[2092]: E0715 11:29:57.381107 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.381152 kubelet[2092]: I0715 11:29:57.381120 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ea4b49fb-f94a-4309-9631-1c291cb3db4b-socket-dir\") pod \"csi-node-driver-96lqs\" (UID: \"ea4b49fb-f94a-4309-9631-1c291cb3db4b\") " pod="calico-system/csi-node-driver-96lqs" Jul 15 11:29:57.381312 kubelet[2092]: E0715 11:29:57.381297 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.381312 kubelet[2092]: W0715 11:29:57.381308 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.381399 kubelet[2092]: E0715 11:29:57.381322 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.381399 kubelet[2092]: I0715 11:29:57.381338 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqfks\" (UniqueName: \"kubernetes.io/projected/ea4b49fb-f94a-4309-9631-1c291cb3db4b-kube-api-access-pqfks\") pod \"csi-node-driver-96lqs\" (UID: \"ea4b49fb-f94a-4309-9631-1c291cb3db4b\") " pod="calico-system/csi-node-driver-96lqs" Jul 15 11:29:57.381542 kubelet[2092]: E0715 11:29:57.381521 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.381542 kubelet[2092]: W0715 11:29:57.381535 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.381666 kubelet[2092]: E0715 11:29:57.381551 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.381666 kubelet[2092]: I0715 11:29:57.381578 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea4b49fb-f94a-4309-9631-1c291cb3db4b-kubelet-dir\") pod \"csi-node-driver-96lqs\" (UID: \"ea4b49fb-f94a-4309-9631-1c291cb3db4b\") " pod="calico-system/csi-node-driver-96lqs" Jul 15 11:29:57.381738 kubelet[2092]: E0715 11:29:57.381725 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.381738 kubelet[2092]: W0715 11:29:57.381733 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.381793 kubelet[2092]: E0715 11:29:57.381746 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.381897 kubelet[2092]: E0715 11:29:57.381886 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.381897 kubelet[2092]: W0715 11:29:57.381894 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.381947 kubelet[2092]: E0715 11:29:57.381906 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.382074 kubelet[2092]: E0715 11:29:57.382061 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.382074 kubelet[2092]: W0715 11:29:57.382073 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.382122 kubelet[2092]: E0715 11:29:57.382086 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.382223 kubelet[2092]: E0715 11:29:57.382207 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.382223 kubelet[2092]: W0715 11:29:57.382218 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.382330 kubelet[2092]: E0715 11:29:57.382229 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.382396 kubelet[2092]: E0715 11:29:57.382384 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.382396 kubelet[2092]: W0715 11:29:57.382393 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.382446 kubelet[2092]: E0715 11:29:57.382404 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.382556 kubelet[2092]: E0715 11:29:57.382546 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.382583 kubelet[2092]: W0715 11:29:57.382556 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.382583 kubelet[2092]: E0715 11:29:57.382568 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.382736 kubelet[2092]: E0715 11:29:57.382727 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.382760 kubelet[2092]: W0715 11:29:57.382735 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.382760 kubelet[2092]: E0715 11:29:57.382746 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.382800 kubelet[2092]: I0715 11:29:57.382761 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ea4b49fb-f94a-4309-9631-1c291cb3db4b-registration-dir\") pod \"csi-node-driver-96lqs\" (UID: \"ea4b49fb-f94a-4309-9631-1c291cb3db4b\") " pod="calico-system/csi-node-driver-96lqs" Jul 15 11:29:57.382913 kubelet[2092]: E0715 11:29:57.382900 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.382913 kubelet[2092]: W0715 11:29:57.382908 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.382974 kubelet[2092]: E0715 11:29:57.382946 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.383061 kubelet[2092]: E0715 11:29:57.383048 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.383061 kubelet[2092]: W0715 11:29:57.383056 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.383142 kubelet[2092]: E0715 11:29:57.383069 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.383214 kubelet[2092]: E0715 11:29:57.383204 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.383214 kubelet[2092]: W0715 11:29:57.383211 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.383276 kubelet[2092]: E0715 11:29:57.383218 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.383377 kubelet[2092]: E0715 11:29:57.383366 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.383377 kubelet[2092]: W0715 11:29:57.383374 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.383445 kubelet[2092]: E0715 11:29:57.383381 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.483125 kubelet[2092]: E0715 11:29:57.483099 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.483125 kubelet[2092]: W0715 11:29:57.483115 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.483125 kubelet[2092]: E0715 11:29:57.483131 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.483451 kubelet[2092]: E0715 11:29:57.483433 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.483451 kubelet[2092]: W0715 11:29:57.483447 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.483525 kubelet[2092]: E0715 11:29:57.483458 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.483750 kubelet[2092]: E0715 11:29:57.483703 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.483750 kubelet[2092]: W0715 11:29:57.483724 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.483750 kubelet[2092]: E0715 11:29:57.483750 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.484001 kubelet[2092]: E0715 11:29:57.483983 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.484001 kubelet[2092]: W0715 11:29:57.483995 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.484089 kubelet[2092]: E0715 11:29:57.484006 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.484215 kubelet[2092]: E0715 11:29:57.484200 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.484215 kubelet[2092]: W0715 11:29:57.484211 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.484282 kubelet[2092]: E0715 11:29:57.484223 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.484433 kubelet[2092]: E0715 11:29:57.484414 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.484433 kubelet[2092]: W0715 11:29:57.484430 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.484508 kubelet[2092]: E0715 11:29:57.484447 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.484651 kubelet[2092]: E0715 11:29:57.484619 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.484651 kubelet[2092]: W0715 11:29:57.484631 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.484726 kubelet[2092]: E0715 11:29:57.484660 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.484824 kubelet[2092]: E0715 11:29:57.484809 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.484824 kubelet[2092]: W0715 11:29:57.484821 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.484899 kubelet[2092]: E0715 11:29:57.484845 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.484946 kubelet[2092]: E0715 11:29:57.484932 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.484946 kubelet[2092]: W0715 11:29:57.484941 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.484996 kubelet[2092]: E0715 11:29:57.484962 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.485050 kubelet[2092]: E0715 11:29:57.485038 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.485050 kubelet[2092]: W0715 11:29:57.485048 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.485102 kubelet[2092]: E0715 11:29:57.485059 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.485179 kubelet[2092]: E0715 11:29:57.485163 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.485179 kubelet[2092]: W0715 11:29:57.485173 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.485248 kubelet[2092]: E0715 11:29:57.485184 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.485300 kubelet[2092]: E0715 11:29:57.485288 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.485300 kubelet[2092]: W0715 11:29:57.485297 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.485360 kubelet[2092]: E0715 11:29:57.485316 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.485435 kubelet[2092]: E0715 11:29:57.485421 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.485435 kubelet[2092]: W0715 11:29:57.485431 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.485550 kubelet[2092]: E0715 11:29:57.485511 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.485746 kubelet[2092]: E0715 11:29:57.485726 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.485800 kubelet[2092]: W0715 11:29:57.485744 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.485800 kubelet[2092]: E0715 11:29:57.485783 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.486027 kubelet[2092]: E0715 11:29:57.486011 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.486027 kubelet[2092]: W0715 11:29:57.486024 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.486109 kubelet[2092]: E0715 11:29:57.486038 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.486243 kubelet[2092]: E0715 11:29:57.486219 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.486243 kubelet[2092]: W0715 11:29:57.486234 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.486329 kubelet[2092]: E0715 11:29:57.486248 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.486451 kubelet[2092]: E0715 11:29:57.486435 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.486451 kubelet[2092]: W0715 11:29:57.486447 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.486516 kubelet[2092]: E0715 11:29:57.486474 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.486684 kubelet[2092]: E0715 11:29:57.486664 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.486684 kubelet[2092]: W0715 11:29:57.486676 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.486752 kubelet[2092]: E0715 11:29:57.486718 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.486897 kubelet[2092]: E0715 11:29:57.486878 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.486942 kubelet[2092]: W0715 11:29:57.486903 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.486942 kubelet[2092]: E0715 11:29:57.486935 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.487103 kubelet[2092]: E0715 11:29:57.487085 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.487103 kubelet[2092]: W0715 11:29:57.487097 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.487168 kubelet[2092]: E0715 11:29:57.487129 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.487304 kubelet[2092]: E0715 11:29:57.487287 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.487304 kubelet[2092]: W0715 11:29:57.487298 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.487386 kubelet[2092]: E0715 11:29:57.487310 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.487509 kubelet[2092]: E0715 11:29:57.487481 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.487509 kubelet[2092]: W0715 11:29:57.487492 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.487509 kubelet[2092]: E0715 11:29:57.487504 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.487707 kubelet[2092]: E0715 11:29:57.487692 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.487707 kubelet[2092]: W0715 11:29:57.487705 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.487770 kubelet[2092]: E0715 11:29:57.487720 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.487976 kubelet[2092]: E0715 11:29:57.487959 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.487976 kubelet[2092]: W0715 11:29:57.487970 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.488039 kubelet[2092]: E0715 11:29:57.487981 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.488159 kubelet[2092]: E0715 11:29:57.488130 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.488159 kubelet[2092]: W0715 11:29:57.488141 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.488159 kubelet[2092]: E0715 11:29:57.488149 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:29:57.493660 kubelet[2092]: E0715 11:29:57.493575 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:29:57.493660 kubelet[2092]: W0715 11:29:57.493591 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:29:57.493660 kubelet[2092]: E0715 11:29:57.493603 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:29:57.594430 env[1313]: time="2025-07-15T11:29:57.594312996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v8t5b,Uid:3dcf0e6d-bb25-496e-897a-67edfd8043a1,Namespace:calico-system,Attempt:0,}" Jul 15 11:29:57.610854 env[1313]: time="2025-07-15T11:29:57.610765100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:57.610854 env[1313]: time="2025-07-15T11:29:57.610823913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:57.610854 env[1313]: time="2025-07-15T11:29:57.610839534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:57.611162 env[1313]: time="2025-07-15T11:29:57.611121526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/add2c5ed8931eec62879dcd8de200ddc32b5920e601fdd5e0d20a090bbff61c4 pid=2642 runtime=io.containerd.runc.v2 Jul 15 11:29:57.647054 env[1313]: time="2025-07-15T11:29:57.646993493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v8t5b,Uid:3dcf0e6d-bb25-496e-897a-67edfd8043a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"add2c5ed8931eec62879dcd8de200ddc32b5920e601fdd5e0d20a090bbff61c4\"" Jul 15 11:29:57.748000 audit[2676]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:57.748000 audit[2676]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcc1473920 a2=0 a3=7ffcc147390c items=0 ppid=2223 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:57.748000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:57.753000 audit[2676]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:29:57.753000 audit[2676]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc1473920 a2=0 a3=0 items=0 ppid=2223 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:29:57.753000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:29:58.544196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170201034.mount: Deactivated successfully. Jul 15 11:29:59.078182 kubelet[2092]: E0715 11:29:59.078136 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96lqs" podUID="ea4b49fb-f94a-4309-9631-1c291cb3db4b" Jul 15 11:30:00.120211 env[1313]: time="2025-07-15T11:30:00.120159599Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:00.219777 env[1313]: time="2025-07-15T11:30:00.219716421Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:00.222875 env[1313]: time="2025-07-15T11:30:00.222830631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:00.226225 env[1313]: time="2025-07-15T11:30:00.226179611Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:00.226723 env[1313]: time="2025-07-15T11:30:00.226625607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 11:30:00.228702 env[1313]: 
time="2025-07-15T11:30:00.228630907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 11:30:00.241686 env[1313]: time="2025-07-15T11:30:00.241630246Z" level=info msg="CreateContainer within sandbox \"95ea617d45cd7e3b3d3f6332772f0463ddd2cda2bb6f3916e4f91268f3bf0f32\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 11:30:00.255191 env[1313]: time="2025-07-15T11:30:00.255145625Z" level=info msg="CreateContainer within sandbox \"95ea617d45cd7e3b3d3f6332772f0463ddd2cda2bb6f3916e4f91268f3bf0f32\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fc3aa3fa44f6f3aa95dae120b048f60cb22862cc2366595983481dee1ccbaa60\"" Jul 15 11:30:00.255854 env[1313]: time="2025-07-15T11:30:00.255815392Z" level=info msg="StartContainer for \"fc3aa3fa44f6f3aa95dae120b048f60cb22862cc2366595983481dee1ccbaa60\"" Jul 15 11:30:00.312218 env[1313]: time="2025-07-15T11:30:00.312156849Z" level=info msg="StartContainer for \"fc3aa3fa44f6f3aa95dae120b048f60cb22862cc2366595983481dee1ccbaa60\" returns successfully" Jul 15 11:30:01.077680 kubelet[2092]: E0715 11:30:01.077626 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96lqs" podUID="ea4b49fb-f94a-4309-9631-1c291cb3db4b" Jul 15 11:30:01.129616 kubelet[2092]: E0715 11:30:01.129587 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:01.206237 kubelet[2092]: E0715 11:30:01.206206 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.206237 kubelet[2092]: W0715 11:30:01.206229 2092 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.206449 kubelet[2092]: E0715 11:30:01.206251 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.206568 kubelet[2092]: E0715 11:30:01.206554 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.206568 kubelet[2092]: W0715 11:30:01.206564 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.206652 kubelet[2092]: E0715 11:30:01.206574 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.206771 kubelet[2092]: E0715 11:30:01.206758 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.206771 kubelet[2092]: W0715 11:30:01.206769 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.206841 kubelet[2092]: E0715 11:30:01.206779 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.206958 kubelet[2092]: E0715 11:30:01.206944 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.206958 kubelet[2092]: W0715 11:30:01.206953 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.207028 kubelet[2092]: E0715 11:30:01.206964 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.207127 kubelet[2092]: E0715 11:30:01.207113 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.207127 kubelet[2092]: W0715 11:30:01.207121 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.207199 kubelet[2092]: E0715 11:30:01.207130 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.207287 kubelet[2092]: E0715 11:30:01.207273 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.207287 kubelet[2092]: W0715 11:30:01.207284 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.207354 kubelet[2092]: E0715 11:30:01.207293 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.207475 kubelet[2092]: E0715 11:30:01.207461 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.207475 kubelet[2092]: W0715 11:30:01.207472 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.207545 kubelet[2092]: E0715 11:30:01.207480 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.207656 kubelet[2092]: E0715 11:30:01.207633 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.207656 kubelet[2092]: W0715 11:30:01.207655 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.207727 kubelet[2092]: E0715 11:30:01.207664 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.207848 kubelet[2092]: E0715 11:30:01.207834 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.207848 kubelet[2092]: W0715 11:30:01.207846 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.207917 kubelet[2092]: E0715 11:30:01.207854 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.208009 kubelet[2092]: E0715 11:30:01.207996 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.208009 kubelet[2092]: W0715 11:30:01.208004 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.208084 kubelet[2092]: E0715 11:30:01.208011 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.208164 kubelet[2092]: E0715 11:30:01.208151 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.208164 kubelet[2092]: W0715 11:30:01.208161 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.208234 kubelet[2092]: E0715 11:30:01.208169 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.208350 kubelet[2092]: E0715 11:30:01.208336 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.208350 kubelet[2092]: W0715 11:30:01.208347 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.208415 kubelet[2092]: E0715 11:30:01.208355 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.208520 kubelet[2092]: E0715 11:30:01.208510 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.208520 kubelet[2092]: W0715 11:30:01.208519 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.208573 kubelet[2092]: E0715 11:30:01.208526 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.208695 kubelet[2092]: E0715 11:30:01.208682 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.208695 kubelet[2092]: W0715 11:30:01.208692 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.208780 kubelet[2092]: E0715 11:30:01.208701 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.208873 kubelet[2092]: E0715 11:30:01.208861 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.208873 kubelet[2092]: W0715 11:30:01.208871 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.208926 kubelet[2092]: E0715 11:30:01.208880 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.306660 kubelet[2092]: E0715 11:30:01.306609 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.306660 kubelet[2092]: W0715 11:30:01.306626 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.306660 kubelet[2092]: E0715 11:30:01.306655 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.306838 kubelet[2092]: E0715 11:30:01.306833 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.306863 kubelet[2092]: W0715 11:30:01.306839 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.306863 kubelet[2092]: E0715 11:30:01.306851 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.307064 kubelet[2092]: E0715 11:30:01.307044 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.307064 kubelet[2092]: W0715 11:30:01.307053 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.307064 kubelet[2092]: E0715 11:30:01.307064 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.307241 kubelet[2092]: E0715 11:30:01.307224 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.307241 kubelet[2092]: W0715 11:30:01.307233 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.307297 kubelet[2092]: E0715 11:30:01.307244 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.307404 kubelet[2092]: E0715 11:30:01.307390 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.307404 kubelet[2092]: W0715 11:30:01.307398 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.307463 kubelet[2092]: E0715 11:30:01.307410 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.307569 kubelet[2092]: E0715 11:30:01.307555 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.307569 kubelet[2092]: W0715 11:30:01.307562 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.307624 kubelet[2092]: E0715 11:30:01.307573 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.307754 kubelet[2092]: E0715 11:30:01.307745 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.307754 kubelet[2092]: W0715 11:30:01.307752 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.307800 kubelet[2092]: E0715 11:30:01.307764 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.308023 kubelet[2092]: E0715 11:30:01.307993 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.308023 kubelet[2092]: W0715 11:30:01.308015 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.308077 kubelet[2092]: E0715 11:30:01.308037 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.308172 kubelet[2092]: E0715 11:30:01.308157 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.308172 kubelet[2092]: W0715 11:30:01.308167 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.308243 kubelet[2092]: E0715 11:30:01.308192 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.308300 kubelet[2092]: E0715 11:30:01.308286 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.308300 kubelet[2092]: W0715 11:30:01.308295 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.308371 kubelet[2092]: E0715 11:30:01.308313 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.308422 kubelet[2092]: E0715 11:30:01.308409 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.308422 kubelet[2092]: W0715 11:30:01.308419 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.308483 kubelet[2092]: E0715 11:30:01.308439 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.308572 kubelet[2092]: E0715 11:30:01.308560 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.308572 kubelet[2092]: W0715 11:30:01.308569 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.308622 kubelet[2092]: E0715 11:30:01.308579 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.308732 kubelet[2092]: E0715 11:30:01.308720 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.308732 kubelet[2092]: W0715 11:30:01.308729 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.308791 kubelet[2092]: E0715 11:30:01.308738 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.309013 kubelet[2092]: E0715 11:30:01.308995 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.309013 kubelet[2092]: W0715 11:30:01.309008 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.309086 kubelet[2092]: E0715 11:30:01.309020 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.309190 kubelet[2092]: E0715 11:30:01.309177 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.309190 kubelet[2092]: W0715 11:30:01.309185 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.309241 kubelet[2092]: E0715 11:30:01.309195 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.309360 kubelet[2092]: E0715 11:30:01.309348 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.309360 kubelet[2092]: W0715 11:30:01.309357 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.309421 kubelet[2092]: E0715 11:30:01.309370 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:01.309579 kubelet[2092]: E0715 11:30:01.309562 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.309579 kubelet[2092]: W0715 11:30:01.309574 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.309675 kubelet[2092]: E0715 11:30:01.309584 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:01.309736 kubelet[2092]: E0715 11:30:01.309723 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:01.309736 kubelet[2092]: W0715 11:30:01.309732 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:01.309785 kubelet[2092]: E0715 11:30:01.309740 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.130401 kubelet[2092]: I0715 11:30:02.130364 2092 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:30:02.130747 kubelet[2092]: E0715 11:30:02.130662 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:02.213672 kubelet[2092]: E0715 11:30:02.213653 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.213672 kubelet[2092]: W0715 11:30:02.213669 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.213786 kubelet[2092]: E0715 11:30:02.213685 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.213887 kubelet[2092]: E0715 11:30:02.213871 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.213887 kubelet[2092]: W0715 11:30:02.213882 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.213983 kubelet[2092]: E0715 11:30:02.213898 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.214064 kubelet[2092]: E0715 11:30:02.214050 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.214064 kubelet[2092]: W0715 11:30:02.214060 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.214143 kubelet[2092]: E0715 11:30:02.214070 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.214221 kubelet[2092]: E0715 11:30:02.214209 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.214221 kubelet[2092]: W0715 11:30:02.214218 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.214298 kubelet[2092]: E0715 11:30:02.214227 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.214366 kubelet[2092]: E0715 11:30:02.214355 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.214366 kubelet[2092]: W0715 11:30:02.214364 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.214456 kubelet[2092]: E0715 11:30:02.214373 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.214514 kubelet[2092]: E0715 11:30:02.214502 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.214514 kubelet[2092]: W0715 11:30:02.214511 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.214592 kubelet[2092]: E0715 11:30:02.214520 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.214674 kubelet[2092]: E0715 11:30:02.214661 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.214674 kubelet[2092]: W0715 11:30:02.214671 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.214761 kubelet[2092]: E0715 11:30:02.214680 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.214817 kubelet[2092]: E0715 11:30:02.214805 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.214817 kubelet[2092]: W0715 11:30:02.214815 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.214893 kubelet[2092]: E0715 11:30:02.214823 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.214957 kubelet[2092]: E0715 11:30:02.214945 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.214957 kubelet[2092]: W0715 11:30:02.214955 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.215035 kubelet[2092]: E0715 11:30:02.214964 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.215097 kubelet[2092]: E0715 11:30:02.215085 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.215097 kubelet[2092]: W0715 11:30:02.215094 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.215171 kubelet[2092]: E0715 11:30:02.215104 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.215247 kubelet[2092]: E0715 11:30:02.215235 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.215247 kubelet[2092]: W0715 11:30:02.215244 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.215324 kubelet[2092]: E0715 11:30:02.215253 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.215389 kubelet[2092]: E0715 11:30:02.215377 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.215389 kubelet[2092]: W0715 11:30:02.215386 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.215478 kubelet[2092]: E0715 11:30:02.215394 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.215548 kubelet[2092]: E0715 11:30:02.215536 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.215548 kubelet[2092]: W0715 11:30:02.215546 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.215627 kubelet[2092]: E0715 11:30:02.215554 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.215708 kubelet[2092]: E0715 11:30:02.215696 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.215708 kubelet[2092]: W0715 11:30:02.215706 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.215785 kubelet[2092]: E0715 11:30:02.215715 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.215861 kubelet[2092]: E0715 11:30:02.215849 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.215861 kubelet[2092]: W0715 11:30:02.215858 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.215932 kubelet[2092]: E0715 11:30:02.215867 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.314002 kubelet[2092]: E0715 11:30:02.313977 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.314002 kubelet[2092]: W0715 11:30:02.313994 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.314162 kubelet[2092]: E0715 11:30:02.314027 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.314202 kubelet[2092]: E0715 11:30:02.314185 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.314202 kubelet[2092]: W0715 11:30:02.314193 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.314278 kubelet[2092]: E0715 11:30:02.314208 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.314402 kubelet[2092]: E0715 11:30:02.314389 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.314402 kubelet[2092]: W0715 11:30:02.314400 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.314480 kubelet[2092]: E0715 11:30:02.314415 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.314607 kubelet[2092]: E0715 11:30:02.314594 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.314607 kubelet[2092]: W0715 11:30:02.314606 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.314702 kubelet[2092]: E0715 11:30:02.314618 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.314780 kubelet[2092]: E0715 11:30:02.314769 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.314780 kubelet[2092]: W0715 11:30:02.314778 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.314829 kubelet[2092]: E0715 11:30:02.314788 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.314930 kubelet[2092]: E0715 11:30:02.314917 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.314930 kubelet[2092]: W0715 11:30:02.314928 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.315026 kubelet[2092]: E0715 11:30:02.314941 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.315100 kubelet[2092]: E0715 11:30:02.315089 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.315145 kubelet[2092]: W0715 11:30:02.315097 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.315145 kubelet[2092]: E0715 11:30:02.315117 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.315344 kubelet[2092]: E0715 11:30:02.315329 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.315344 kubelet[2092]: W0715 11:30:02.315341 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.315418 kubelet[2092]: E0715 11:30:02.315355 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.315509 kubelet[2092]: E0715 11:30:02.315497 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.315509 kubelet[2092]: W0715 11:30:02.315506 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.315573 kubelet[2092]: E0715 11:30:02.315516 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.315681 kubelet[2092]: E0715 11:30:02.315670 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.315681 kubelet[2092]: W0715 11:30:02.315678 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.315768 kubelet[2092]: E0715 11:30:02.315699 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.315822 kubelet[2092]: E0715 11:30:02.315810 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.315847 kubelet[2092]: W0715 11:30:02.315822 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.315873 kubelet[2092]: E0715 11:30:02.315844 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.316002 kubelet[2092]: E0715 11:30:02.315992 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.316029 kubelet[2092]: W0715 11:30:02.316002 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.316029 kubelet[2092]: E0715 11:30:02.316015 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.316209 kubelet[2092]: E0715 11:30:02.316193 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.316209 kubelet[2092]: W0715 11:30:02.316207 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.316279 kubelet[2092]: E0715 11:30:02.316224 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.316449 kubelet[2092]: E0715 11:30:02.316428 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.316497 kubelet[2092]: W0715 11:30:02.316461 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.316497 kubelet[2092]: E0715 11:30:02.316474 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.316612 kubelet[2092]: E0715 11:30:02.316600 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.316649 kubelet[2092]: W0715 11:30:02.316611 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.316649 kubelet[2092]: E0715 11:30:02.316626 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.316777 kubelet[2092]: E0715 11:30:02.316768 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.316804 kubelet[2092]: W0715 11:30:02.316777 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.316804 kubelet[2092]: E0715 11:30:02.316785 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:02.316941 kubelet[2092]: E0715 11:30:02.316930 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.316941 kubelet[2092]: W0715 11:30:02.316939 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.317008 kubelet[2092]: E0715 11:30:02.316947 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:30:02.317218 kubelet[2092]: E0715 11:30:02.317193 2092 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:30:02.317218 kubelet[2092]: W0715 11:30:02.317208 2092 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:30:02.317218 kubelet[2092]: E0715 11:30:02.317216 2092 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:30:03.077993 kubelet[2092]: E0715 11:30:03.077942 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96lqs" podUID="ea4b49fb-f94a-4309-9631-1c291cb3db4b" Jul 15 11:30:04.069676 env[1313]: time="2025-07-15T11:30:04.069611104Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:04.071317 env[1313]: time="2025-07-15T11:30:04.071284124Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:04.072889 env[1313]: time="2025-07-15T11:30:04.072857554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:04.074211 env[1313]: time="2025-07-15T11:30:04.074183388Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:04.074694 env[1313]: time="2025-07-15T11:30:04.074668798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 11:30:04.076296 env[1313]: time="2025-07-15T11:30:04.076271072Z" level=info msg="CreateContainer within sandbox \"add2c5ed8931eec62879dcd8de200ddc32b5920e601fdd5e0d20a090bbff61c4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 11:30:04.088135 env[1313]: time="2025-07-15T11:30:04.088079777Z" level=info msg="CreateContainer within sandbox \"add2c5ed8931eec62879dcd8de200ddc32b5920e601fdd5e0d20a090bbff61c4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"38cd4334dd66a9d81d34ce9e91a85c2663fde9e212b3eccfd0a484617a09362f\"" Jul 15 11:30:04.088559 env[1313]: time="2025-07-15T11:30:04.088534577Z" level=info msg="StartContainer for \"38cd4334dd66a9d81d34ce9e91a85c2663fde9e212b3eccfd0a484617a09362f\"" Jul 15 11:30:04.132167 env[1313]: time="2025-07-15T11:30:04.132118674Z" level=info msg="StartContainer for \"38cd4334dd66a9d81d34ce9e91a85c2663fde9e212b3eccfd0a484617a09362f\" returns successfully" Jul 15 11:30:04.154389 kubelet[2092]: I0715 11:30:04.154312 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c9c5f76b9-8qlbm" podStartSLOduration=5.077921675 podStartE2EDuration="8.154264069s" podCreationTimestamp="2025-07-15 11:29:56 +0000 UTC" firstStartedPulling="2025-07-15 11:29:57.151884147 +0000 UTC m=+19.179883391" lastFinishedPulling="2025-07-15 11:30:00.228226551 +0000 UTC m=+22.256225785" observedRunningTime="2025-07-15 11:30:01.437089305 +0000 UTC m=+23.465088539" 
watchObservedRunningTime="2025-07-15 11:30:04.154264069 +0000 UTC m=+26.182263303" Jul 15 11:30:04.161724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38cd4334dd66a9d81d34ce9e91a85c2663fde9e212b3eccfd0a484617a09362f-rootfs.mount: Deactivated successfully. Jul 15 11:30:04.294932 env[1313]: time="2025-07-15T11:30:04.294864527Z" level=info msg="shim disconnected" id=38cd4334dd66a9d81d34ce9e91a85c2663fde9e212b3eccfd0a484617a09362f Jul 15 11:30:04.294932 env[1313]: time="2025-07-15T11:30:04.294928119Z" level=warning msg="cleaning up after shim disconnected" id=38cd4334dd66a9d81d34ce9e91a85c2663fde9e212b3eccfd0a484617a09362f namespace=k8s.io Jul 15 11:30:04.294932 env[1313]: time="2025-07-15T11:30:04.294937547Z" level=info msg="cleaning up dead shim" Jul 15 11:30:04.301170 env[1313]: time="2025-07-15T11:30:04.301131944Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2844 runtime=io.containerd.runc.v2\n" Jul 15 11:30:05.078694 kubelet[2092]: E0715 11:30:05.078606 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96lqs" podUID="ea4b49fb-f94a-4309-9631-1c291cb3db4b" Jul 15 11:30:05.142656 env[1313]: time="2025-07-15T11:30:05.142590888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 11:30:07.078501 kubelet[2092]: E0715 11:30:07.078449 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96lqs" podUID="ea4b49fb-f94a-4309-9631-1c291cb3db4b" Jul 15 11:30:09.078271 kubelet[2092]: E0715 11:30:09.078210 2092 pod_workers.go:1301] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96lqs" podUID="ea4b49fb-f94a-4309-9631-1c291cb3db4b" Jul 15 11:30:11.078305 kubelet[2092]: E0715 11:30:11.078258 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96lqs" podUID="ea4b49fb-f94a-4309-9631-1c291cb3db4b" Jul 15 11:30:12.851511 env[1313]: time="2025-07-15T11:30:12.851446405Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:12.853519 env[1313]: time="2025-07-15T11:30:12.853471107Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:12.855107 env[1313]: time="2025-07-15T11:30:12.855079026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:12.856658 env[1313]: time="2025-07-15T11:30:12.856623533Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:12.857138 env[1313]: time="2025-07-15T11:30:12.857110990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 
11:30:12.858934 env[1313]: time="2025-07-15T11:30:12.858904433Z" level=info msg="CreateContainer within sandbox \"add2c5ed8931eec62879dcd8de200ddc32b5920e601fdd5e0d20a090bbff61c4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 11:30:13.004100 env[1313]: time="2025-07-15T11:30:13.004035042Z" level=info msg="CreateContainer within sandbox \"add2c5ed8931eec62879dcd8de200ddc32b5920e601fdd5e0d20a090bbff61c4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81062991d930965fb68c0fe0540a6ed08ef6fe5b46e1cadfde8737342f9241f7\"" Jul 15 11:30:13.004789 env[1313]: time="2025-07-15T11:30:13.004737287Z" level=info msg="StartContainer for \"81062991d930965fb68c0fe0540a6ed08ef6fe5b46e1cadfde8737342f9241f7\"" Jul 15 11:30:13.078078 kubelet[2092]: E0715 11:30:13.077941 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96lqs" podUID="ea4b49fb-f94a-4309-9631-1c291cb3db4b" Jul 15 11:30:13.600080 systemd[1]: Started sshd@7-10.0.0.41:22-10.0.0.1:44926.service. Jul 15 11:30:13.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.41:22-10.0.0.1:44926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:13.604578 kernel: kauditd_printk_skb: 25 callbacks suppressed Jul 15 11:30:13.604725 kernel: audit: type=1130 audit(1752579013.598:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.41:22-10.0.0.1:44926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:30:13.634782 env[1313]: time="2025-07-15T11:30:13.634457441Z" level=info msg="StartContainer for \"81062991d930965fb68c0fe0540a6ed08ef6fe5b46e1cadfde8737342f9241f7\" returns successfully" Jul 15 11:30:13.640000 audit[2892]: USER_ACCT pid=2892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.642873 sshd[2892]: Accepted publickey for core from 10.0.0.1 port 44926 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:13.646240 sshd[2892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:13.644000 audit[2892]: CRED_ACQ pid=2892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.651080 systemd-logind[1289]: New session 8 of user core. Jul 15 11:30:13.651776 kernel: audit: type=1101 audit(1752579013.640:279): pid=2892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.651827 kernel: audit: type=1103 audit(1752579013.644:280): pid=2892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.651859 kernel: audit: type=1006 audit(1752579013.644:281): pid=2892 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jul 15 11:30:13.652069 systemd[1]: Started session-8.scope. 
Jul 15 11:30:13.644000 audit[2892]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5062eb50 a2=3 a3=0 items=0 ppid=1 pid=2892 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:13.659569 kernel: audit: type=1300 audit(1752579013.644:281): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5062eb50 a2=3 a3=0 items=0 ppid=1 pid=2892 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:13.659691 kernel: audit: type=1327 audit(1752579013.644:281): proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:13.644000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:13.656000 audit[2892]: USER_START pid=2892 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.665400 kernel: audit: type=1105 audit(1752579013.656:282): pid=2892 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.665438 kernel: audit: type=1103 audit(1752579013.657:283): pid=2895 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.657000 audit[2895]: CRED_ACQ pid=2895 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.792439 kubelet[2092]: I0715 11:30:13.792368 2092 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:30:13.867884 kubelet[2092]: E0715 11:30:13.792854 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:13.873164 sshd[2892]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:13.872000 audit[2892]: USER_END pid=2892 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.875716 systemd[1]: sshd@7-10.0.0.41:22-10.0.0.1:44926.service: Deactivated successfully. Jul 15 11:30:13.876567 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 11:30:13.881040 kernel: audit: type=1106 audit(1752579013.872:284): pid=2892 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.872000 audit[2892]: CRED_DISP pid=2892 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.882187 systemd-logind[1289]: Session 8 logged out. Waiting for processes to exit. 
Jul 15 11:30:13.889231 kernel: audit: type=1104 audit(1752579013.872:285): pid=2892 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:13.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.41:22-10.0.0.1:44926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:13.887419 systemd-logind[1289]: Removed session 8. Jul 15 11:30:13.891000 audit[2909]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:13.891000 audit[2909]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffc73f9f80 a2=0 a3=7fffc73f9f6c items=0 ppid=2223 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:13.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:13.897000 audit[2909]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=2909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:13.897000 audit[2909]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffc73f9f80 a2=0 a3=7fffc73f9f6c items=0 ppid=2223 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:13.897000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 
11:30:14.639042 kubelet[2092]: E0715 11:30:14.639011 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:14.657691 env[1313]: time="2025-07-15T11:30:14.657608071Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 11:30:14.672549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81062991d930965fb68c0fe0540a6ed08ef6fe5b46e1cadfde8737342f9241f7-rootfs.mount: Deactivated successfully. Jul 15 11:30:14.675472 env[1313]: time="2025-07-15T11:30:14.675424563Z" level=info msg="shim disconnected" id=81062991d930965fb68c0fe0540a6ed08ef6fe5b46e1cadfde8737342f9241f7 Jul 15 11:30:14.675472 env[1313]: time="2025-07-15T11:30:14.675465510Z" level=warning msg="cleaning up after shim disconnected" id=81062991d930965fb68c0fe0540a6ed08ef6fe5b46e1cadfde8737342f9241f7 namespace=k8s.io Jul 15 11:30:14.675472 env[1313]: time="2025-07-15T11:30:14.675473225Z" level=info msg="cleaning up dead shim" Jul 15 11:30:14.680689 env[1313]: time="2025-07-15T11:30:14.680646888Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2925 runtime=io.containerd.runc.v2\n" Jul 15 11:30:14.722354 kubelet[2092]: I0715 11:30:14.722336 2092 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 11:30:14.803990 kubelet[2092]: I0715 11:30:14.803937 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7edd760f-4b3e-4f59-9e90-ee9828b261c3-config-volume\") pod \"coredns-7c65d6cfc9-pbq8g\" (UID: \"7edd760f-4b3e-4f59-9e90-ee9828b261c3\") " 
pod="kube-system/coredns-7c65d6cfc9-pbq8g" Jul 15 11:30:14.803990 kubelet[2092]: I0715 11:30:14.803976 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01a942dc-88ea-4854-a722-52bcabc6d456-whisker-ca-bundle\") pod \"whisker-55867d57ff-4mhmr\" (UID: \"01a942dc-88ea-4854-a722-52bcabc6d456\") " pod="calico-system/whisker-55867d57ff-4mhmr" Jul 15 11:30:14.804202 kubelet[2092]: I0715 11:30:14.804007 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0745c5fc-ce0f-47aa-8707-bacfa72cacb9-tigera-ca-bundle\") pod \"calico-kube-controllers-78c7897fc4-w24xn\" (UID: \"0745c5fc-ce0f-47aa-8707-bacfa72cacb9\") " pod="calico-system/calico-kube-controllers-78c7897fc4-w24xn" Jul 15 11:30:14.804202 kubelet[2092]: I0715 11:30:14.804027 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfv6f\" (UniqueName: \"kubernetes.io/projected/01a942dc-88ea-4854-a722-52bcabc6d456-kube-api-access-rfv6f\") pod \"whisker-55867d57ff-4mhmr\" (UID: \"01a942dc-88ea-4854-a722-52bcabc6d456\") " pod="calico-system/whisker-55867d57ff-4mhmr" Jul 15 11:30:14.804202 kubelet[2092]: I0715 11:30:14.804045 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqhj6\" (UniqueName: \"kubernetes.io/projected/0745c5fc-ce0f-47aa-8707-bacfa72cacb9-kube-api-access-rqhj6\") pod \"calico-kube-controllers-78c7897fc4-w24xn\" (UID: \"0745c5fc-ce0f-47aa-8707-bacfa72cacb9\") " pod="calico-system/calico-kube-controllers-78c7897fc4-w24xn" Jul 15 11:30:14.804202 kubelet[2092]: I0715 11:30:14.804075 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrxp5\" (UniqueName: 
\"kubernetes.io/projected/7edd760f-4b3e-4f59-9e90-ee9828b261c3-kube-api-access-nrxp5\") pod \"coredns-7c65d6cfc9-pbq8g\" (UID: \"7edd760f-4b3e-4f59-9e90-ee9828b261c3\") " pod="kube-system/coredns-7c65d6cfc9-pbq8g" Jul 15 11:30:14.804202 kubelet[2092]: I0715 11:30:14.804109 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01a942dc-88ea-4854-a722-52bcabc6d456-whisker-backend-key-pair\") pod \"whisker-55867d57ff-4mhmr\" (UID: \"01a942dc-88ea-4854-a722-52bcabc6d456\") " pod="calico-system/whisker-55867d57ff-4mhmr" Jul 15 11:30:14.804356 kubelet[2092]: I0715 11:30:14.804128 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ad1dfeee-bd95-4e9e-b226-86afd94e0964-calico-apiserver-certs\") pod \"calico-apiserver-77c5cfffc-tnzhf\" (UID: \"ad1dfeee-bd95-4e9e-b226-86afd94e0964\") " pod="calico-apiserver/calico-apiserver-77c5cfffc-tnzhf" Jul 15 11:30:14.804356 kubelet[2092]: I0715 11:30:14.804147 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5acc111e-02a1-439a-93ba-39e1bce08fb2-config-volume\") pod \"coredns-7c65d6cfc9-7tnjd\" (UID: \"5acc111e-02a1-439a-93ba-39e1bce08fb2\") " pod="kube-system/coredns-7c65d6cfc9-7tnjd" Jul 15 11:30:14.804356 kubelet[2092]: I0715 11:30:14.804169 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jssbn\" (UniqueName: \"kubernetes.io/projected/ad1dfeee-bd95-4e9e-b226-86afd94e0964-kube-api-access-jssbn\") pod \"calico-apiserver-77c5cfffc-tnzhf\" (UID: \"ad1dfeee-bd95-4e9e-b226-86afd94e0964\") " pod="calico-apiserver/calico-apiserver-77c5cfffc-tnzhf" Jul 15 11:30:14.804356 kubelet[2092]: I0715 11:30:14.804195 2092 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsvnl\" (UniqueName: \"kubernetes.io/projected/5acc111e-02a1-439a-93ba-39e1bce08fb2-kube-api-access-rsvnl\") pod \"coredns-7c65d6cfc9-7tnjd\" (UID: \"5acc111e-02a1-439a-93ba-39e1bce08fb2\") " pod="kube-system/coredns-7c65d6cfc9-7tnjd" Jul 15 11:30:14.905534 kubelet[2092]: I0715 11:30:14.905395 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdr52\" (UniqueName: \"kubernetes.io/projected/b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4-kube-api-access-mdr52\") pod \"calico-apiserver-77c5cfffc-xsvx6\" (UID: \"b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4\") " pod="calico-apiserver/calico-apiserver-77c5cfffc-xsvx6" Jul 15 11:30:14.905534 kubelet[2092]: I0715 11:30:14.905431 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsn2n\" (UniqueName: \"kubernetes.io/projected/9b356024-f0d5-45bf-a4bc-f2e9fe1afa45-kube-api-access-hsn2n\") pod \"goldmane-58fd7646b9-phmvm\" (UID: \"9b356024-f0d5-45bf-a4bc-f2e9fe1afa45\") " pod="calico-system/goldmane-58fd7646b9-phmvm" Jul 15 11:30:14.905534 kubelet[2092]: I0715 11:30:14.905484 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4-calico-apiserver-certs\") pod \"calico-apiserver-77c5cfffc-xsvx6\" (UID: \"b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4\") " pod="calico-apiserver/calico-apiserver-77c5cfffc-xsvx6" Jul 15 11:30:14.905534 kubelet[2092]: I0715 11:30:14.905511 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b356024-f0d5-45bf-a4bc-f2e9fe1afa45-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-phmvm\" (UID: \"9b356024-f0d5-45bf-a4bc-f2e9fe1afa45\") " 
pod="calico-system/goldmane-58fd7646b9-phmvm" Jul 15 11:30:14.905534 kubelet[2092]: I0715 11:30:14.905525 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9b356024-f0d5-45bf-a4bc-f2e9fe1afa45-goldmane-key-pair\") pod \"goldmane-58fd7646b9-phmvm\" (UID: \"9b356024-f0d5-45bf-a4bc-f2e9fe1afa45\") " pod="calico-system/goldmane-58fd7646b9-phmvm" Jul 15 11:30:14.905775 kubelet[2092]: I0715 11:30:14.905557 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b356024-f0d5-45bf-a4bc-f2e9fe1afa45-config\") pod \"goldmane-58fd7646b9-phmvm\" (UID: \"9b356024-f0d5-45bf-a4bc-f2e9fe1afa45\") " pod="calico-system/goldmane-58fd7646b9-phmvm" Jul 15 11:30:15.041269 kubelet[2092]: E0715 11:30:15.041234 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:15.041922 env[1313]: time="2025-07-15T11:30:15.041880082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pbq8g,Uid:7edd760f-4b3e-4f59-9e90-ee9828b261c3,Namespace:kube-system,Attempt:0,}" Jul 15 11:30:15.045830 kubelet[2092]: E0715 11:30:15.045806 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:15.046131 env[1313]: time="2025-07-15T11:30:15.046097223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7tnjd,Uid:5acc111e-02a1-439a-93ba-39e1bce08fb2,Namespace:kube-system,Attempt:0,}" Jul 15 11:30:15.052179 env[1313]: time="2025-07-15T11:30:15.052154250Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-78c7897fc4-w24xn,Uid:0745c5fc-ce0f-47aa-8707-bacfa72cacb9,Namespace:calico-system,Attempt:0,}" Jul 15 11:30:15.059936 env[1313]: time="2025-07-15T11:30:15.059905274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5cfffc-tnzhf,Uid:ad1dfeee-bd95-4e9e-b226-86afd94e0964,Namespace:calico-apiserver,Attempt:0,}" Jul 15 11:30:15.060157 env[1313]: time="2025-07-15T11:30:15.060119080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55867d57ff-4mhmr,Uid:01a942dc-88ea-4854-a722-52bcabc6d456,Namespace:calico-system,Attempt:0,}" Jul 15 11:30:15.064735 env[1313]: time="2025-07-15T11:30:15.064699302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5cfffc-xsvx6,Uid:b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4,Namespace:calico-apiserver,Attempt:0,}" Jul 15 11:30:15.064855 env[1313]: time="2025-07-15T11:30:15.064831583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-phmvm,Uid:9b356024-f0d5-45bf-a4bc-f2e9fe1afa45,Namespace:calico-system,Attempt:0,}" Jul 15 11:30:15.081529 env[1313]: time="2025-07-15T11:30:15.081485991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96lqs,Uid:ea4b49fb-f94a-4309-9631-1c291cb3db4b,Namespace:calico-system,Attempt:0,}" Jul 15 11:30:15.146483 env[1313]: time="2025-07-15T11:30:15.146404788Z" level=error msg="Failed to destroy network for sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.146984 env[1313]: time="2025-07-15T11:30:15.146954723Z" level=error msg="Failed to destroy network for sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.147144 env[1313]: time="2025-07-15T11:30:15.147095911Z" level=error msg="encountered an error cleaning up failed sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.147199 env[1313]: time="2025-07-15T11:30:15.147163009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pbq8g,Uid:7edd760f-4b3e-4f59-9e90-ee9828b261c3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.147398 env[1313]: time="2025-07-15T11:30:15.147368088Z" level=error msg="encountered an error cleaning up failed sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.147465 kubelet[2092]: E0715 11:30:15.147387 2092 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.147465 kubelet[2092]: E0715 
11:30:15.147450 2092 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pbq8g" Jul 15 11:30:15.147551 kubelet[2092]: E0715 11:30:15.147467 2092 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pbq8g" Jul 15 11:30:15.147551 kubelet[2092]: E0715 11:30:15.147517 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-pbq8g_kube-system(7edd760f-4b3e-4f59-9e90-ee9828b261c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-pbq8g_kube-system(7edd760f-4b3e-4f59-9e90-ee9828b261c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pbq8g" podUID="7edd760f-4b3e-4f59-9e90-ee9828b261c3" Jul 15 11:30:15.147784 env[1313]: time="2025-07-15T11:30:15.147706972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7tnjd,Uid:5acc111e-02a1-439a-93ba-39e1bce08fb2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to 
setup network for sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.148307 kubelet[2092]: E0715 11:30:15.148259 2092 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.148307 kubelet[2092]: E0715 11:30:15.148288 2092 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7tnjd" Jul 15 11:30:15.148307 kubelet[2092]: E0715 11:30:15.148300 2092 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7tnjd" Jul 15 11:30:15.148522 kubelet[2092]: E0715 11:30:15.148322 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7tnjd_kube-system(5acc111e-02a1-439a-93ba-39e1bce08fb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-7tnjd_kube-system(5acc111e-02a1-439a-93ba-39e1bce08fb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7tnjd" podUID="5acc111e-02a1-439a-93ba-39e1bce08fb2" Jul 15 11:30:15.202939 env[1313]: time="2025-07-15T11:30:15.202883872Z" level=error msg="Failed to destroy network for sandbox \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.203394 env[1313]: time="2025-07-15T11:30:15.203369013Z" level=error msg="encountered an error cleaning up failed sandbox \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.203525 env[1313]: time="2025-07-15T11:30:15.203487989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c7897fc4-w24xn,Uid:0745c5fc-ce0f-47aa-8707-bacfa72cacb9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.203742 env[1313]: time="2025-07-15T11:30:15.203437894Z" level=error msg="Failed to destroy network for 
sandbox \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.203965 env[1313]: time="2025-07-15T11:30:15.203942083Z" level=error msg="encountered an error cleaning up failed sandbox \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.204017 env[1313]: time="2025-07-15T11:30:15.203974574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-phmvm,Uid:9b356024-f0d5-45bf-a4bc-f2e9fe1afa45,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.204070 kubelet[2092]: E0715 11:30:15.203933 2092 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.204070 kubelet[2092]: E0715 11:30:15.203999 2092 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78c7897fc4-w24xn" Jul 15 11:30:15.204070 kubelet[2092]: E0715 11:30:15.204017 2092 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78c7897fc4-w24xn" Jul 15 11:30:15.204165 kubelet[2092]: E0715 11:30:15.204085 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78c7897fc4-w24xn_calico-system(0745c5fc-ce0f-47aa-8707-bacfa72cacb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78c7897fc4-w24xn_calico-system(0745c5fc-ce0f-47aa-8707-bacfa72cacb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78c7897fc4-w24xn" podUID="0745c5fc-ce0f-47aa-8707-bacfa72cacb9" Jul 15 11:30:15.204165 kubelet[2092]: E0715 11:30:15.204108 2092 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jul 15 11:30:15.204238 kubelet[2092]: E0715 11:30:15.204160 2092 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-phmvm" Jul 15 11:30:15.204238 kubelet[2092]: E0715 11:30:15.204180 2092 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-phmvm" Jul 15 11:30:15.204238 kubelet[2092]: E0715 11:30:15.204215 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-phmvm_calico-system(9b356024-f0d5-45bf-a4bc-f2e9fe1afa45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-phmvm_calico-system(9b356024-f0d5-45bf-a4bc-f2e9fe1afa45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-phmvm" podUID="9b356024-f0d5-45bf-a4bc-f2e9fe1afa45" Jul 15 11:30:15.206797 env[1313]: time="2025-07-15T11:30:15.206739577Z" level=error msg="Failed to destroy network for sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.207766 env[1313]: time="2025-07-15T11:30:15.207739797Z" level=error msg="encountered an error cleaning up failed sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.207884 env[1313]: time="2025-07-15T11:30:15.207855196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55867d57ff-4mhmr,Uid:01a942dc-88ea-4854-a722-52bcabc6d456,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.208267 kubelet[2092]: E0715 11:30:15.208129 2092 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.208267 kubelet[2092]: E0715 11:30:15.208184 2092 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55867d57ff-4mhmr" Jul 15 11:30:15.208267 kubelet[2092]: E0715 11:30:15.208202 2092 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55867d57ff-4mhmr" Jul 15 11:30:15.208386 kubelet[2092]: E0715 11:30:15.208239 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55867d57ff-4mhmr_calico-system(01a942dc-88ea-4854-a722-52bcabc6d456)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55867d57ff-4mhmr_calico-system(01a942dc-88ea-4854-a722-52bcabc6d456)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55867d57ff-4mhmr" podUID="01a942dc-88ea-4854-a722-52bcabc6d456" Jul 15 11:30:15.225239 env[1313]: time="2025-07-15T11:30:15.225182442Z" level=error msg="Failed to destroy network for sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.225705 env[1313]: time="2025-07-15T11:30:15.225679728Z" level=error msg="encountered an error cleaning up failed sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.225855 env[1313]: time="2025-07-15T11:30:15.225798203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5cfffc-xsvx6,Uid:b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.226042 kubelet[2092]: E0715 11:30:15.225999 2092 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.226101 kubelet[2092]: E0715 11:30:15.226056 2092 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c5cfffc-xsvx6" Jul 15 11:30:15.226101 kubelet[2092]: E0715 11:30:15.226075 2092 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c5cfffc-xsvx6" Jul 15 11:30:15.226169 kubelet[2092]: E0715 11:30:15.226111 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c5cfffc-xsvx6_calico-apiserver(b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c5cfffc-xsvx6_calico-apiserver(b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c5cfffc-xsvx6" podUID="b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4" Jul 15 11:30:15.227040 env[1313]: time="2025-07-15T11:30:15.226980418Z" level=error msg="Failed to destroy network for sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.227319 env[1313]: time="2025-07-15T11:30:15.227289847Z" level=error msg="encountered an error cleaning up failed sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.227404 env[1313]: time="2025-07-15T11:30:15.227330754Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-77c5cfffc-tnzhf,Uid:ad1dfeee-bd95-4e9e-b226-86afd94e0964,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.227508 kubelet[2092]: E0715 11:30:15.227442 2092 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.227508 kubelet[2092]: E0715 11:30:15.227472 2092 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c5cfffc-tnzhf" Jul 15 11:30:15.227508 kubelet[2092]: E0715 11:30:15.227487 2092 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c5cfffc-tnzhf" Jul 15 11:30:15.227698 kubelet[2092]: E0715 11:30:15.227525 2092 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c5cfffc-tnzhf_calico-apiserver(ad1dfeee-bd95-4e9e-b226-86afd94e0964)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c5cfffc-tnzhf_calico-apiserver(ad1dfeee-bd95-4e9e-b226-86afd94e0964)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c5cfffc-tnzhf" podUID="ad1dfeee-bd95-4e9e-b226-86afd94e0964" Jul 15 11:30:15.235659 env[1313]: time="2025-07-15T11:30:15.235593240Z" level=error msg="Failed to destroy network for sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.235927 env[1313]: time="2025-07-15T11:30:15.235889072Z" level=error msg="encountered an error cleaning up failed sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.235977 env[1313]: time="2025-07-15T11:30:15.235927515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96lqs,Uid:ea4b49fb-f94a-4309-9631-1c291cb3db4b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.236105 kubelet[2092]: E0715 11:30:15.236063 2092 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.236248 kubelet[2092]: E0715 11:30:15.236113 2092 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-96lqs" Jul 15 11:30:15.236248 kubelet[2092]: E0715 11:30:15.236127 2092 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-96lqs" Jul 15 11:30:15.236248 kubelet[2092]: E0715 11:30:15.236161 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-96lqs_calico-system(ea4b49fb-f94a-4309-9631-1c291cb3db4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-96lqs_calico-system(ea4b49fb-f94a-4309-9631-1c291cb3db4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-96lqs" podUID="ea4b49fb-f94a-4309-9631-1c291cb3db4b" Jul 15 11:30:15.642259 kubelet[2092]: I0715 11:30:15.640991 2092 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Jul 15 11:30:15.642259 kubelet[2092]: I0715 11:30:15.641825 2092 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:15.642711 env[1313]: time="2025-07-15T11:30:15.641617798Z" level=info msg="StopPodSandbox for \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\"" Jul 15 11:30:15.643045 env[1313]: time="2025-07-15T11:30:15.643017207Z" level=info msg="StopPodSandbox for \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\"" Jul 15 11:30:15.643907 kubelet[2092]: I0715 11:30:15.643675 2092 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:15.644092 env[1313]: time="2025-07-15T11:30:15.644070488Z" level=info msg="StopPodSandbox for \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\"" Jul 15 11:30:15.647135 env[1313]: time="2025-07-15T11:30:15.647004651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 11:30:15.647900 kubelet[2092]: I0715 11:30:15.647461 2092 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:15.648313 env[1313]: time="2025-07-15T11:30:15.648274584Z" level=info msg="StopPodSandbox for 
\"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\"" Jul 15 11:30:15.649349 kubelet[2092]: I0715 11:30:15.649102 2092 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:15.649574 env[1313]: time="2025-07-15T11:30:15.649541310Z" level=info msg="StopPodSandbox for \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\"" Jul 15 11:30:15.650443 kubelet[2092]: I0715 11:30:15.650179 2092 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" Jul 15 11:30:15.650808 env[1313]: time="2025-07-15T11:30:15.650776998Z" level=info msg="StopPodSandbox for \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\"" Jul 15 11:30:15.651761 kubelet[2092]: I0715 11:30:15.651353 2092 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:15.652045 env[1313]: time="2025-07-15T11:30:15.652007676Z" level=info msg="StopPodSandbox for \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\"" Jul 15 11:30:15.653093 kubelet[2092]: I0715 11:30:15.653049 2092 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" Jul 15 11:30:15.653572 env[1313]: time="2025-07-15T11:30:15.653542712Z" level=info msg="StopPodSandbox for \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\"" Jul 15 11:30:15.667221 env[1313]: time="2025-07-15T11:30:15.667171483Z" level=error msg="StopPodSandbox for \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\" failed" error="failed to destroy network for sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.667579 kubelet[2092]: E0715 11:30:15.667363 2092 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Jul 15 11:30:15.667579 kubelet[2092]: E0715 11:30:15.667415 2092 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387"} Jul 15 11:30:15.667579 kubelet[2092]: E0715 11:30:15.667472 2092 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01a942dc-88ea-4854-a722-52bcabc6d456\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:30:15.667579 kubelet[2092]: E0715 11:30:15.667504 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01a942dc-88ea-4854-a722-52bcabc6d456\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-55867d57ff-4mhmr" podUID="01a942dc-88ea-4854-a722-52bcabc6d456" Jul 15 11:30:15.691253 env[1313]: time="2025-07-15T11:30:15.691196673Z" level=error msg="StopPodSandbox for \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\" failed" error="failed to destroy network for sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.692429 env[1313]: time="2025-07-15T11:30:15.692025078Z" level=error msg="StopPodSandbox for \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\" failed" error="failed to destroy network for sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.692652 kubelet[2092]: E0715 11:30:15.692597 2092 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:15.692719 kubelet[2092]: E0715 11:30:15.692662 2092 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6"} Jul 15 11:30:15.692719 kubelet[2092]: E0715 11:30:15.692694 2092 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:30:15.692719 kubelet[2092]: E0715 11:30:15.692603 2092 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:15.692826 kubelet[2092]: E0715 11:30:15.692713 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c5cfffc-xsvx6" podUID="b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4" Jul 15 11:30:15.692826 kubelet[2092]: E0715 11:30:15.692743 2092 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961"} Jul 15 11:30:15.692826 kubelet[2092]: E0715 11:30:15.692793 2092 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"7edd760f-4b3e-4f59-9e90-ee9828b261c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:30:15.692826 kubelet[2092]: E0715 11:30:15.692816 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7edd760f-4b3e-4f59-9e90-ee9828b261c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pbq8g" podUID="7edd760f-4b3e-4f59-9e90-ee9828b261c3" Jul 15 11:30:15.732269 env[1313]: time="2025-07-15T11:30:15.732202162Z" level=error msg="StopPodSandbox for \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\" failed" error="failed to destroy network for sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.732561 kubelet[2092]: E0715 11:30:15.732516 2092 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:15.732610 kubelet[2092]: E0715 11:30:15.732577 2092 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157"} Jul 15 11:30:15.732652 kubelet[2092]: E0715 11:30:15.732622 2092 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea4b49fb-f94a-4309-9631-1c291cb3db4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:30:15.732717 kubelet[2092]: E0715 11:30:15.732661 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea4b49fb-f94a-4309-9631-1c291cb3db4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-96lqs" podUID="ea4b49fb-f94a-4309-9631-1c291cb3db4b" Jul 15 11:30:15.732777 env[1313]: time="2025-07-15T11:30:15.732749061Z" level=error msg="StopPodSandbox for \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\" failed" error="failed to destroy network for sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 
11:30:15.732897 kubelet[2092]: E0715 11:30:15.732872 2092 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:15.732934 kubelet[2092]: E0715 11:30:15.732923 2092 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200"} Jul 15 11:30:15.732959 kubelet[2092]: E0715 11:30:15.732945 2092 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad1dfeee-bd95-4e9e-b226-86afd94e0964\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:30:15.733000 kubelet[2092]: E0715 11:30:15.732960 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad1dfeee-bd95-4e9e-b226-86afd94e0964\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c5cfffc-tnzhf" podUID="ad1dfeee-bd95-4e9e-b226-86afd94e0964" Jul 15 11:30:15.733607 env[1313]: 
time="2025-07-15T11:30:15.733580361Z" level=error msg="StopPodSandbox for \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\" failed" error="failed to destroy network for sandbox \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.733836 kubelet[2092]: E0715 11:30:15.733807 2092 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" Jul 15 11:30:15.733895 kubelet[2092]: E0715 11:30:15.733854 2092 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba"} Jul 15 11:30:15.733895 kubelet[2092]: E0715 11:30:15.733875 2092 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0745c5fc-ce0f-47aa-8707-bacfa72cacb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:30:15.733976 kubelet[2092]: E0715 11:30:15.733891 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0745c5fc-ce0f-47aa-8707-bacfa72cacb9\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78c7897fc4-w24xn" podUID="0745c5fc-ce0f-47aa-8707-bacfa72cacb9" Jul 15 11:30:15.734912 env[1313]: time="2025-07-15T11:30:15.734852818Z" level=error msg="StopPodSandbox for \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\" failed" error="failed to destroy network for sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.735239 kubelet[2092]: E0715 11:30:15.735215 2092 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:15.735239 kubelet[2092]: E0715 11:30:15.735239 2092 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57"} Jul 15 11:30:15.735341 kubelet[2092]: E0715 11:30:15.735259 2092 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5acc111e-02a1-439a-93ba-39e1bce08fb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:30:15.735341 kubelet[2092]: E0715 11:30:15.735276 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5acc111e-02a1-439a-93ba-39e1bce08fb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7tnjd" podUID="5acc111e-02a1-439a-93ba-39e1bce08fb2" Jul 15 11:30:15.739581 env[1313]: time="2025-07-15T11:30:15.739530334Z" level=error msg="StopPodSandbox for \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\" failed" error="failed to destroy network for sandbox \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:15.739747 kubelet[2092]: E0715 11:30:15.739723 2092 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" Jul 15 11:30:15.739814 kubelet[2092]: E0715 11:30:15.739748 2092 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3"} Jul 15 11:30:15.739814 kubelet[2092]: E0715 11:30:15.739768 2092 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b356024-f0d5-45bf-a4bc-f2e9fe1afa45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:30:15.739814 kubelet[2092]: E0715 11:30:15.739795 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b356024-f0d5-45bf-a4bc-f2e9fe1afa45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-phmvm" podUID="9b356024-f0d5-45bf-a4bc-f2e9fe1afa45" Jul 15 11:30:18.875514 systemd[1]: Started sshd@8-10.0.0.41:22-10.0.0.1:44928.service. Jul 15 11:30:18.880882 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 15 11:30:18.880920 kernel: audit: type=1130 audit(1752579018.873:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.41:22-10.0.0.1:44928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:30:18.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.41:22-10.0.0.1:44928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:18.918000 audit[3373]: USER_ACCT pid=3373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:18.919982 sshd[3373]: Accepted publickey for core from 10.0.0.1 port 44928 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:18.923000 audit[3373]: CRED_ACQ pid=3373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:18.925441 sshd[3373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:18.928929 systemd-logind[1289]: New session 9 of user core. 
Jul 15 11:30:18.930232 kernel: audit: type=1101 audit(1752579018.918:290): pid=3373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:18.930345 kernel: audit: type=1103 audit(1752579018.923:291): pid=3373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:18.930374 kernel: audit: type=1006 audit(1752579018.923:292): pid=3373 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 15 11:30:18.929901 systemd[1]: Started session-9.scope. Jul 15 11:30:18.923000 audit[3373]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda9d6d1d0 a2=3 a3=0 items=0 ppid=1 pid=3373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:18.923000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:18.942210 kernel: audit: type=1300 audit(1752579018.923:292): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda9d6d1d0 a2=3 a3=0 items=0 ppid=1 pid=3373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:18.942262 kernel: audit: type=1327 audit(1752579018.923:292): proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:18.942288 kernel: audit: type=1105 audit(1752579018.933:293): pid=3373 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:18.933000 audit[3373]: USER_START pid=3373 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:18.947720 kernel: audit: type=1103 audit(1752579018.938:294): pid=3376 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:18.938000 audit[3376]: CRED_ACQ pid=3376 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:19.056747 sshd[3373]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:19.056000 audit[3373]: USER_END pid=3373 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:19.059175 systemd[1]: sshd@8-10.0.0.41:22-10.0.0.1:44928.service: Deactivated successfully. Jul 15 11:30:19.060382 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 11:30:19.060887 systemd-logind[1289]: Session 9 logged out. Waiting for processes to exit. Jul 15 11:30:19.061729 systemd-logind[1289]: Removed session 9. 
Jul 15 11:30:19.056000 audit[3373]: CRED_DISP pid=3373 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:19.068663 kernel: audit: type=1106 audit(1752579019.056:295): pid=3373 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:19.068719 kernel: audit: type=1104 audit(1752579019.056:296): pid=3373 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:19.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.41:22-10.0.0.1:44928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:24.065251 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 15 11:30:24.065395 kernel: audit: type=1130 audit(1752579024.059:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.41:22-10.0.0.1:41698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:24.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.41:22-10.0.0.1:41698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:24.059631 systemd[1]: Started sshd@9-10.0.0.41:22-10.0.0.1:41698.service. 
Jul 15 11:30:24.096000 audit[3388]: USER_ACCT pid=3388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:24.106158 kernel: audit: type=1101 audit(1752579024.096:299): pid=3388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:24.106197 kernel: audit: type=1103 audit(1752579024.099:300): pid=3388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:24.106218 kernel: audit: type=1006 audit(1752579024.099:301): pid=3388 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 15 11:30:24.099000 audit[3388]: CRED_ACQ pid=3388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:24.106284 sshd[3388]: Accepted publickey for core from 10.0.0.1 port 41698 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:24.100922 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:24.104671 systemd[1]: Started session-10.scope. Jul 15 11:30:24.105698 systemd-logind[1289]: New session 10 of user core. 
Jul 15 11:30:24.118987 kernel: audit: type=1300 audit(1752579024.099:301): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3de2a140 a2=3 a3=0 items=0 ppid=1 pid=3388 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:24.119037 kernel: audit: type=1327 audit(1752579024.099:301): proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:24.119058 kernel: audit: type=1105 audit(1752579024.109:302): pid=3388 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:24.119077 kernel: audit: type=1103 audit(1752579024.110:303): pid=3391 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:24.099000 audit[3388]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3de2a140 a2=3 a3=0 items=0 ppid=1 pid=3388 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:24.099000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:24.109000 audit[3388]: USER_START pid=3388 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:24.110000 audit[3391]: CRED_ACQ pid=3391 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:24.123505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1476843793.mount: Deactivated successfully. Jul 15 11:30:26.078752 env[1313]: time="2025-07-15T11:30:26.078709970Z" level=info msg="StopPodSandbox for \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\"" Jul 15 11:30:26.342969 env[1313]: time="2025-07-15T11:30:26.342846381Z" level=error msg="StopPodSandbox for \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\" failed" error="failed to destroy network for sandbox \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:26.343193 kubelet[2092]: E0715 11:30:26.343107 2092 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" Jul 15 11:30:26.343193 kubelet[2092]: E0715 11:30:26.343189 2092 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3"} Jul 15 11:30:26.343595 kubelet[2092]: E0715 11:30:26.343235 2092 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b356024-f0d5-45bf-a4bc-f2e9fe1afa45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:30:26.343595 kubelet[2092]: E0715 11:30:26.343264 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b356024-f0d5-45bf-a4bc-f2e9fe1afa45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-phmvm" podUID="9b356024-f0d5-45bf-a4bc-f2e9fe1afa45" Jul 15 11:30:26.376818 sshd[3388]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:26.377000 audit[3388]: USER_END pid=3388 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:26.380068 systemd[1]: sshd@9-10.0.0.41:22-10.0.0.1:41698.service: Deactivated successfully. Jul 15 11:30:26.381308 systemd-logind[1289]: Session 10 logged out. Waiting for processes to exit. Jul 15 11:30:26.381498 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 11:30:26.382303 systemd-logind[1289]: Removed session 10. 
Jul 15 11:30:26.387454 kernel: audit: type=1106 audit(1752579026.377:304): pid=3388 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:26.387577 kernel: audit: type=1104 audit(1752579026.377:305): pid=3388 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:26.377000 audit[3388]: CRED_DISP pid=3388 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:26.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.41:22-10.0.0.1:41698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:30:26.394733 env[1313]: time="2025-07-15T11:30:26.394679400Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:26.397300 env[1313]: time="2025-07-15T11:30:26.397258374Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:26.399408 env[1313]: time="2025-07-15T11:30:26.399376915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:26.401023 env[1313]: time="2025-07-15T11:30:26.400976303Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:26.401251 env[1313]: time="2025-07-15T11:30:26.401213672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 11:30:26.407913 env[1313]: time="2025-07-15T11:30:26.407873782Z" level=info msg="CreateContainer within sandbox \"add2c5ed8931eec62879dcd8de200ddc32b5920e601fdd5e0d20a090bbff61c4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 11:30:26.424766 env[1313]: time="2025-07-15T11:30:26.424723555Z" level=info msg="CreateContainer within sandbox \"add2c5ed8931eec62879dcd8de200ddc32b5920e601fdd5e0d20a090bbff61c4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"024c5805f6ca4db709dc9520f88da646aaf11b11913a43e40f08a301bc93ecce\"" Jul 15 11:30:26.425526 env[1313]: time="2025-07-15T11:30:26.425505777Z" level=info 
msg="StartContainer for \"024c5805f6ca4db709dc9520f88da646aaf11b11913a43e40f08a301bc93ecce\"" Jul 15 11:30:26.671381 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 11:30:26.671516 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 11:30:27.078336 env[1313]: time="2025-07-15T11:30:27.078209437Z" level=info msg="StopPodSandbox for \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\"" Jul 15 11:30:27.099795 env[1313]: time="2025-07-15T11:30:27.099735315Z" level=error msg="StopPodSandbox for \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\" failed" error="failed to destroy network for sandbox \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:30:27.100174 kubelet[2092]: E0715 11:30:27.099941 2092 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" Jul 15 11:30:27.100174 kubelet[2092]: E0715 11:30:27.099985 2092 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba"} Jul 15 11:30:27.100174 kubelet[2092]: E0715 11:30:27.100015 2092 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0745c5fc-ce0f-47aa-8707-bacfa72cacb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:30:27.100174 kubelet[2092]: E0715 11:30:27.100035 2092 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0745c5fc-ce0f-47aa-8707-bacfa72cacb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78c7897fc4-w24xn" podUID="0745c5fc-ce0f-47aa-8707-bacfa72cacb9" Jul 15 11:30:27.141002 env[1313]: time="2025-07-15T11:30:27.140944584Z" level=info msg="StartContainer for \"024c5805f6ca4db709dc9520f88da646aaf11b11913a43e40f08a301bc93ecce\" returns successfully" Jul 15 11:30:27.253625 env[1313]: time="2025-07-15T11:30:27.253583009Z" level=info msg="StopPodSandbox for \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\"" Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.325 [INFO][3507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.325 [INFO][3507] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" iface="eth0" netns="/var/run/netns/cni-40e8535c-db2c-2939-c27c-01153ae6c990" Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.325 [INFO][3507] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" iface="eth0" netns="/var/run/netns/cni-40e8535c-db2c-2939-c27c-01153ae6c990" Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.326 [INFO][3507] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" iface="eth0" netns="/var/run/netns/cni-40e8535c-db2c-2939-c27c-01153ae6c990" Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.326 [INFO][3507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.326 [INFO][3507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.376 [INFO][3516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" HandleID="k8s-pod-network.9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Workload="localhost-k8s-whisker--55867d57ff--4mhmr-eth0" Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.377 [INFO][3516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.377 [INFO][3516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.382 [WARNING][3516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" HandleID="k8s-pod-network.9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Workload="localhost-k8s-whisker--55867d57ff--4mhmr-eth0" Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.382 [INFO][3516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" HandleID="k8s-pod-network.9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Workload="localhost-k8s-whisker--55867d57ff--4mhmr-eth0" Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.383 [INFO][3516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:27.387323 env[1313]: 2025-07-15 11:30:27.385 [INFO][3507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Jul 15 11:30:27.387995 env[1313]: time="2025-07-15T11:30:27.387956488Z" level=info msg="TearDown network for sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\" successfully" Jul 15 11:30:27.387995 env[1313]: time="2025-07-15T11:30:27.387992095Z" level=info msg="StopPodSandbox for \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\" returns successfully" Jul 15 11:30:27.407358 systemd[1]: run-netns-cni\x2d40e8535c\x2ddb2c\x2d2939\x2dc27c\x2d01153ae6c990.mount: Deactivated successfully. 
Jul 15 11:30:27.485484 kubelet[2092]: I0715 11:30:27.485440 2092 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01a942dc-88ea-4854-a722-52bcabc6d456-whisker-backend-key-pair\") pod \"01a942dc-88ea-4854-a722-52bcabc6d456\" (UID: \"01a942dc-88ea-4854-a722-52bcabc6d456\") " Jul 15 11:30:27.485484 kubelet[2092]: I0715 11:30:27.485486 2092 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfv6f\" (UniqueName: \"kubernetes.io/projected/01a942dc-88ea-4854-a722-52bcabc6d456-kube-api-access-rfv6f\") pod \"01a942dc-88ea-4854-a722-52bcabc6d456\" (UID: \"01a942dc-88ea-4854-a722-52bcabc6d456\") " Jul 15 11:30:27.485977 kubelet[2092]: I0715 11:30:27.485508 2092 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01a942dc-88ea-4854-a722-52bcabc6d456-whisker-ca-bundle\") pod \"01a942dc-88ea-4854-a722-52bcabc6d456\" (UID: \"01a942dc-88ea-4854-a722-52bcabc6d456\") " Jul 15 11:30:27.485977 kubelet[2092]: I0715 11:30:27.485920 2092 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a942dc-88ea-4854-a722-52bcabc6d456-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "01a942dc-88ea-4854-a722-52bcabc6d456" (UID: "01a942dc-88ea-4854-a722-52bcabc6d456"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 11:30:27.488453 kubelet[2092]: I0715 11:30:27.488398 2092 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a942dc-88ea-4854-a722-52bcabc6d456-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "01a942dc-88ea-4854-a722-52bcabc6d456" (UID: "01a942dc-88ea-4854-a722-52bcabc6d456"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 11:30:27.488610 kubelet[2092]: I0715 11:30:27.488518 2092 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a942dc-88ea-4854-a722-52bcabc6d456-kube-api-access-rfv6f" (OuterVolumeSpecName: "kube-api-access-rfv6f") pod "01a942dc-88ea-4854-a722-52bcabc6d456" (UID: "01a942dc-88ea-4854-a722-52bcabc6d456"). InnerVolumeSpecName "kube-api-access-rfv6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 11:30:27.490995 systemd[1]: var-lib-kubelet-pods-01a942dc\x2d88ea\x2d4854\x2da722\x2d52bcabc6d456-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drfv6f.mount: Deactivated successfully. Jul 15 11:30:27.491168 systemd[1]: var-lib-kubelet-pods-01a942dc\x2d88ea\x2d4854\x2da722\x2d52bcabc6d456-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 11:30:27.586265 kubelet[2092]: I0715 11:30:27.586210 2092 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01a942dc-88ea-4854-a722-52bcabc6d456-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:27.586265 kubelet[2092]: I0715 11:30:27.586245 2092 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01a942dc-88ea-4854-a722-52bcabc6d456-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:27.586265 kubelet[2092]: I0715 11:30:27.586253 2092 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfv6f\" (UniqueName: \"kubernetes.io/projected/01a942dc-88ea-4854-a722-52bcabc6d456-kube-api-access-rfv6f\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:28.079147 env[1313]: time="2025-07-15T11:30:28.079100169Z" level=info msg="StopPodSandbox for \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\"" Jul 15 11:30:28.079364 env[1313]: 
time="2025-07-15T11:30:28.079328621Z" level=info msg="StopPodSandbox for \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\"" Jul 15 11:30:28.185845 kubelet[2092]: I0715 11:30:28.185205 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v8t5b" podStartSLOduration=3.430976512 podStartE2EDuration="32.185152197s" podCreationTimestamp="2025-07-15 11:29:56 +0000 UTC" firstStartedPulling="2025-07-15 11:29:57.6478895 +0000 UTC m=+19.675888734" lastFinishedPulling="2025-07-15 11:30:26.402065195 +0000 UTC m=+48.430064419" observedRunningTime="2025-07-15 11:30:28.168113459 +0000 UTC m=+50.196112693" watchObservedRunningTime="2025-07-15 11:30:28.185152197 +0000 UTC m=+50.213151421" Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.124 [INFO][3550] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.124 [INFO][3550] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" iface="eth0" netns="/var/run/netns/cni-57badef6-1fa9-9bdd-42e3-d3f4adc8fe88" Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.124 [INFO][3550] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" iface="eth0" netns="/var/run/netns/cni-57badef6-1fa9-9bdd-42e3-d3f4adc8fe88" Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.124 [INFO][3550] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" iface="eth0" netns="/var/run/netns/cni-57badef6-1fa9-9bdd-42e3-d3f4adc8fe88" Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.124 [INFO][3550] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.124 [INFO][3550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.169 [INFO][3567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" HandleID="k8s-pod-network.6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.170 [INFO][3567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.170 [INFO][3567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.187 [WARNING][3567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" HandleID="k8s-pod-network.6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.187 [INFO][3567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" HandleID="k8s-pod-network.6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.188 [INFO][3567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:28.204935 env[1313]: 2025-07-15 11:30:28.196 [INFO][3550] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:28.209488 systemd[1]: run-netns-cni\x2d57badef6\x2d1fa9\x2d9bdd\x2d42e3\x2dd3f4adc8fe88.mount: Deactivated successfully. 
Jul 15 11:30:28.211584 env[1313]: time="2025-07-15T11:30:28.211541022Z" level=info msg="TearDown network for sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\" successfully" Jul 15 11:30:28.211708 env[1313]: time="2025-07-15T11:30:28.211687428Z" level=info msg="StopPodSandbox for \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\" returns successfully" Jul 15 11:30:28.217161 env[1313]: time="2025-07-15T11:30:28.217119048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96lqs,Uid:ea4b49fb-f94a-4309-9631-1c291cb3db4b,Namespace:calico-system,Attempt:1,}" Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.136 [INFO][3549] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.136 [INFO][3549] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" iface="eth0" netns="/var/run/netns/cni-2d98caa9-19d5-214a-d116-205483d00754" Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.136 [INFO][3549] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" iface="eth0" netns="/var/run/netns/cni-2d98caa9-19d5-214a-d116-205483d00754" Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.136 [INFO][3549] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" iface="eth0" netns="/var/run/netns/cni-2d98caa9-19d5-214a-d116-205483d00754" Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.136 [INFO][3549] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.136 [INFO][3549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.220 [INFO][3574] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" HandleID="k8s-pod-network.b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.220 [INFO][3574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.221 [INFO][3574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.241 [WARNING][3574] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" HandleID="k8s-pod-network.b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.241 [INFO][3574] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" HandleID="k8s-pod-network.b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.243 [INFO][3574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:28.253589 env[1313]: 2025-07-15 11:30:28.251 [INFO][3549] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:28.254354 env[1313]: time="2025-07-15T11:30:28.253706035Z" level=info msg="TearDown network for sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\" successfully" Jul 15 11:30:28.254354 env[1313]: time="2025-07-15T11:30:28.253737234Z" level=info msg="StopPodSandbox for \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\" returns successfully" Jul 15 11:30:28.254782 env[1313]: time="2025-07-15T11:30:28.254735283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5cfffc-tnzhf,Uid:ad1dfeee-bd95-4e9e-b226-86afd94e0964,Namespace:calico-apiserver,Attempt:1,}" Jul 15 11:30:28.290892 kubelet[2092]: I0715 11:30:28.290850 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52hld\" (UniqueName: \"kubernetes.io/projected/cc0f52e8-32d3-4694-9c13-325ba0936b35-kube-api-access-52hld\") pod \"whisker-79d6476997-mkkkn\" (UID: \"cc0f52e8-32d3-4694-9c13-325ba0936b35\") " 
pod="calico-system/whisker-79d6476997-mkkkn" Jul 15 11:30:28.291051 kubelet[2092]: I0715 11:30:28.290924 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc0f52e8-32d3-4694-9c13-325ba0936b35-whisker-ca-bundle\") pod \"whisker-79d6476997-mkkkn\" (UID: \"cc0f52e8-32d3-4694-9c13-325ba0936b35\") " pod="calico-system/whisker-79d6476997-mkkkn" Jul 15 11:30:28.291051 kubelet[2092]: I0715 11:30:28.290953 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc0f52e8-32d3-4694-9c13-325ba0936b35-whisker-backend-key-pair\") pod \"whisker-79d6476997-mkkkn\" (UID: \"cc0f52e8-32d3-4694-9c13-325ba0936b35\") " pod="calico-system/whisker-79d6476997-mkkkn" Jul 15 11:30:28.409383 systemd[1]: run-netns-cni\x2d2d98caa9\x2d19d5\x2d214a\x2dd116\x2d205483d00754.mount: Deactivated successfully. 
Jul 15 11:30:28.657874 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:30:28.657988 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6f7d0e49b59: link becomes ready Jul 15 11:30:28.658293 systemd-networkd[1077]: cali6f7d0e49b59: Link UP Jul 15 11:30:28.658517 systemd-networkd[1077]: cali6f7d0e49b59: Gained carrier Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.272 [INFO][3617] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.283 [INFO][3617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--96lqs-eth0 csi-node-driver- calico-system ea4b49fb-f94a-4309-9631-1c291cb3db4b 977 0 2025-07-15 11:29:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-96lqs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6f7d0e49b59 [] [] }} ContainerID="cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" Namespace="calico-system" Pod="csi-node-driver-96lqs" WorkloadEndpoint="localhost-k8s-csi--node--driver--96lqs-" Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.283 [INFO][3617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" Namespace="calico-system" Pod="csi-node-driver-96lqs" WorkloadEndpoint="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.307 [INFO][3644] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" 
HandleID="k8s-pod-network.cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.307 [INFO][3644] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" HandleID="k8s-pod-network.cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d70f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-96lqs", "timestamp":"2025-07-15 11:30:28.307591401 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.307 [INFO][3644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.307 [INFO][3644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.308 [INFO][3644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.315 [INFO][3644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" host="localhost" Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.322 [INFO][3644] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.329 [INFO][3644] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.334 [INFO][3644] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.336 [INFO][3644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.336 [INFO][3644] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" host="localhost" Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.337 [INFO][3644] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5 Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.465 [INFO][3644] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" host="localhost" Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.640 [INFO][3644] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" host="localhost" Jul 15 
11:30:28.672373 env[1313]: 2025-07-15 11:30:28.640 [INFO][3644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" host="localhost" Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.640 [INFO][3644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:28.672373 env[1313]: 2025-07-15 11:30:28.640 [INFO][3644] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" HandleID="k8s-pod-network.cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:28.673041 env[1313]: 2025-07-15 11:30:28.645 [INFO][3617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" Namespace="calico-system" Pod="csi-node-driver-96lqs" WorkloadEndpoint="localhost-k8s-csi--node--driver--96lqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--96lqs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ea4b49fb-f94a-4309-9631-1c291cb3db4b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-96lqs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f7d0e49b59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:28.673041 env[1313]: 2025-07-15 11:30:28.645 [INFO][3617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" Namespace="calico-system" Pod="csi-node-driver-96lqs" WorkloadEndpoint="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:28.673041 env[1313]: 2025-07-15 11:30:28.645 [INFO][3617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f7d0e49b59 ContainerID="cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" Namespace="calico-system" Pod="csi-node-driver-96lqs" WorkloadEndpoint="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:28.673041 env[1313]: 2025-07-15 11:30:28.659 [INFO][3617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" Namespace="calico-system" Pod="csi-node-driver-96lqs" WorkloadEndpoint="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:28.673041 env[1313]: 2025-07-15 11:30:28.660 [INFO][3617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" Namespace="calico-system" Pod="csi-node-driver-96lqs" WorkloadEndpoint="localhost-k8s-csi--node--driver--96lqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--96lqs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ea4b49fb-f94a-4309-9631-1c291cb3db4b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5", Pod:"csi-node-driver-96lqs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f7d0e49b59", MAC:"8e:e0:9e:40:57:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:28.673041 env[1313]: 2025-07-15 11:30:28.670 [INFO][3617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5" Namespace="calico-system" Pod="csi-node-driver-96lqs" WorkloadEndpoint="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:28.685763 env[1313]: time="2025-07-15T11:30:28.685707490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:30:28.686150 env[1313]: time="2025-07-15T11:30:28.686124038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:30:28.686253 env[1313]: time="2025-07-15T11:30:28.686225811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:30:28.687114 env[1313]: time="2025-07-15T11:30:28.686985478Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5 pid=3682 runtime=io.containerd.runc.v2 Jul 15 11:30:28.742586 systemd-networkd[1077]: califdeddd48904: Link UP Jul 15 11:30:28.744335 systemd-networkd[1077]: califdeddd48904: Gained carrier Jul 15 11:30:28.744660 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califdeddd48904: link becomes ready Jul 15 11:30:28.745778 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.288 [INFO][3630] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.299 [INFO][3630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0 calico-apiserver-77c5cfffc- calico-apiserver ad1dfeee-bd95-4e9e-b226-86afd94e0964 980 0 2025-07-15 11:29:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c5cfffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c5cfffc-tnzhf eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] califdeddd48904 [] [] }} ContainerID="99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-tnzhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.299 [INFO][3630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-tnzhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.326 [INFO][3654] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" HandleID="k8s-pod-network.99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.326 [INFO][3654] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" HandleID="k8s-pod-network.99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00013b770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77c5cfffc-tnzhf", "timestamp":"2025-07-15 11:30:28.326540164 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.327 [INFO][3654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.640 [INFO][3654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.640 [INFO][3654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.650 [INFO][3654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" host="localhost" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.665 [INFO][3654] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.671 [INFO][3654] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.684 [INFO][3654] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.686 [INFO][3654] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.686 [INFO][3654] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" host="localhost" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.693 [INFO][3654] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880 Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.709 [INFO][3654] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" host="localhost" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.718 [INFO][3654] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" host="localhost" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.718 [INFO][3654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" host="localhost" Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.718 [INFO][3654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:28.759358 env[1313]: 2025-07-15 11:30:28.718 [INFO][3654] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" HandleID="k8s-pod-network.99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:28.759983 env[1313]: 2025-07-15 11:30:28.740 [INFO][3630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-tnzhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0", GenerateName:"calico-apiserver-77c5cfffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad1dfeee-bd95-4e9e-b226-86afd94e0964", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5cfffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c5cfffc-tnzhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califdeddd48904", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:28.759983 env[1313]: 2025-07-15 11:30:28.740 [INFO][3630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-tnzhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:28.759983 env[1313]: 2025-07-15 11:30:28.740 [INFO][3630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califdeddd48904 ContainerID="99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-tnzhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:28.759983 env[1313]: 2025-07-15 11:30:28.744 [INFO][3630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-tnzhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:28.759983 env[1313]: 2025-07-15 11:30:28.747 [INFO][3630] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-tnzhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0", GenerateName:"calico-apiserver-77c5cfffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad1dfeee-bd95-4e9e-b226-86afd94e0964", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5cfffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880", Pod:"calico-apiserver-77c5cfffc-tnzhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califdeddd48904", MAC:"16:04:fe:82:26:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:28.759983 env[1313]: 2025-07-15 11:30:28.755 [INFO][3630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880" 
Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-tnzhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:28.760262 env[1313]: time="2025-07-15T11:30:28.759678460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96lqs,Uid:ea4b49fb-f94a-4309-9631-1c291cb3db4b,Namespace:calico-system,Attempt:1,} returns sandbox id \"cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5\"" Jul 15 11:30:28.761863 env[1313]: time="2025-07-15T11:30:28.761833769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 11:30:28.773250 env[1313]: time="2025-07-15T11:30:28.773095794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:30:28.773250 env[1313]: time="2025-07-15T11:30:28.773132554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:30:28.773250 env[1313]: time="2025-07-15T11:30:28.773147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:30:28.773436 env[1313]: time="2025-07-15T11:30:28.773319277Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880 pid=3731 runtime=io.containerd.runc.v2 Jul 15 11:30:28.796997 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:30:28.818554 env[1313]: time="2025-07-15T11:30:28.818513454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5cfffc-tnzhf,Uid:ad1dfeee-bd95-4e9e-b226-86afd94e0964,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880\"" Jul 15 11:30:28.830726 env[1313]: time="2025-07-15T11:30:28.830689739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d6476997-mkkkn,Uid:cc0f52e8-32d3-4694-9c13-325ba0936b35,Namespace:calico-system,Attempt:0,}" Jul 15 11:30:28.918473 systemd-networkd[1077]: cali67f8c405a2d: Link UP Jul 15 11:30:28.919664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali67f8c405a2d: link becomes ready Jul 15 11:30:28.919765 systemd-networkd[1077]: cali67f8c405a2d: Gained carrier Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.857 [INFO][3767] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.867 [INFO][3767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79d6476997--mkkkn-eth0 whisker-79d6476997- calico-system cc0f52e8-32d3-4694-9c13-325ba0936b35 996 0 2025-07-15 11:30:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79d6476997 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 
localhost whisker-79d6476997-mkkkn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali67f8c405a2d [] [] }} ContainerID="57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" Namespace="calico-system" Pod="whisker-79d6476997-mkkkn" WorkloadEndpoint="localhost-k8s-whisker--79d6476997--mkkkn-" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.867 [INFO][3767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" Namespace="calico-system" Pod="whisker-79d6476997-mkkkn" WorkloadEndpoint="localhost-k8s-whisker--79d6476997--mkkkn-eth0" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.886 [INFO][3780] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" HandleID="k8s-pod-network.57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" Workload="localhost-k8s-whisker--79d6476997--mkkkn-eth0" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.886 [INFO][3780] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" HandleID="k8s-pod-network.57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" Workload="localhost-k8s-whisker--79d6476997--mkkkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79d6476997-mkkkn", "timestamp":"2025-07-15 11:30:28.88623695 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.886 [INFO][3780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.886 [INFO][3780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.886 [INFO][3780] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.891 [INFO][3780] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" host="localhost" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.895 [INFO][3780] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.899 [INFO][3780] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.900 [INFO][3780] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.902 [INFO][3780] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.902 [INFO][3780] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" host="localhost" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.903 [INFO][3780] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.906 [INFO][3780] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" host="localhost" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.915 [INFO][3780] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" host="localhost" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.915 [INFO][3780] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" host="localhost" Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.915 [INFO][3780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:28.972857 env[1313]: 2025-07-15 11:30:28.915 [INFO][3780] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" HandleID="k8s-pod-network.57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" Workload="localhost-k8s-whisker--79d6476997--mkkkn-eth0" Jul 15 11:30:28.974015 env[1313]: 2025-07-15 11:30:28.917 [INFO][3767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" Namespace="calico-system" Pod="whisker-79d6476997-mkkkn" WorkloadEndpoint="localhost-k8s-whisker--79d6476997--mkkkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79d6476997--mkkkn-eth0", GenerateName:"whisker-79d6476997-", Namespace:"calico-system", SelfLink:"", UID:"cc0f52e8-32d3-4694-9c13-325ba0936b35", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79d6476997", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79d6476997-mkkkn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali67f8c405a2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:28.974015 env[1313]: 2025-07-15 11:30:28.917 [INFO][3767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" Namespace="calico-system" Pod="whisker-79d6476997-mkkkn" WorkloadEndpoint="localhost-k8s-whisker--79d6476997--mkkkn-eth0" Jul 15 11:30:28.974015 env[1313]: 2025-07-15 11:30:28.917 [INFO][3767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67f8c405a2d ContainerID="57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" Namespace="calico-system" Pod="whisker-79d6476997-mkkkn" WorkloadEndpoint="localhost-k8s-whisker--79d6476997--mkkkn-eth0" Jul 15 11:30:28.974015 env[1313]: 2025-07-15 11:30:28.920 [INFO][3767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" Namespace="calico-system" Pod="whisker-79d6476997-mkkkn" WorkloadEndpoint="localhost-k8s-whisker--79d6476997--mkkkn-eth0" Jul 15 11:30:28.974015 env[1313]: 2025-07-15 11:30:28.920 [INFO][3767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" Namespace="calico-system" Pod="whisker-79d6476997-mkkkn" WorkloadEndpoint="localhost-k8s-whisker--79d6476997--mkkkn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79d6476997--mkkkn-eth0", GenerateName:"whisker-79d6476997-", Namespace:"calico-system", SelfLink:"", UID:"cc0f52e8-32d3-4694-9c13-325ba0936b35", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79d6476997", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd", Pod:"whisker-79d6476997-mkkkn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali67f8c405a2d", MAC:"32:8f:00:11:83:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:28.974015 env[1313]: 2025-07-15 11:30:28.971 [INFO][3767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd" Namespace="calico-system" Pod="whisker-79d6476997-mkkkn" WorkloadEndpoint="localhost-k8s-whisker--79d6476997--mkkkn-eth0" Jul 15 11:30:28.984593 env[1313]: time="2025-07-15T11:30:28.984501550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:30:28.984593 env[1313]: time="2025-07-15T11:30:28.984554791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:30:28.984593 env[1313]: time="2025-07-15T11:30:28.984566332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:30:28.984983 env[1313]: time="2025-07-15T11:30:28.984907268Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd pid=3802 runtime=io.containerd.runc.v2 Jul 15 11:30:29.005522 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:30:29.029309 env[1313]: time="2025-07-15T11:30:29.029256646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d6476997-mkkkn,Uid:cc0f52e8-32d3-4694-9c13-325ba0936b35,Namespace:calico-system,Attempt:0,} returns sandbox id \"57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd\"" Jul 15 11:30:29.203948 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 15 11:30:29.204099 kernel: audit: type=1400 audit(1752579029.199:307): avc: denied { write } for pid=3885 comm="tee" name="fd" dev="proc" ino=27014 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:30:29.199000 audit[3885]: AVC avc: denied { write } for pid=3885 comm="tee" name="fd" dev="proc" ino=27014 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:30:29.208809 kernel: audit: type=1300 audit(1752579029.199:307): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd3b51a7dd a2=241 a3=1b6 items=1 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.199000 audit[3885]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd3b51a7dd a2=241 a3=1b6 items=1 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.210337 kernel: audit: type=1307 audit(1752579029.199:307): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 15 11:30:29.199000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 15 11:30:29.213351 kernel: audit: type=1302 audit(1752579029.199:307): item=0 name="/dev/fd/63" inode=24477 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:30:29.213500 kernel: audit: type=1327 audit(1752579029.199:307): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:30:29.199000 audit: PATH item=0 name="/dev/fd/63" inode=24477 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:30:29.199000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:30:29.208000 audit[3899]: AVC avc: denied { write } for pid=3899 comm="tee" name="fd" dev="proc" ino=24488 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:30:29.218880 kernel: audit: type=1400 audit(1752579029.208:308): avc: denied { write } for pid=3899 comm="tee" name="fd" dev="proc" ino=24488 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:30:29.208000 audit[3899]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeda5407ed a2=241 a3=1b6 items=1 ppid=3865 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.223667 kernel: audit: type=1300 audit(1752579029.208:308): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeda5407ed a2=241 a3=1b6 items=1 ppid=3865 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.225516 kernel: audit: type=1307 audit(1752579029.208:308): cwd="/etc/service/enabled/confd/log" Jul 15 11:30:29.208000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 15 11:30:29.228678 kernel: audit: type=1302 audit(1752579029.208:308): item=0 name="/dev/fd/63" inode=25971 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:30:29.208000 audit: PATH item=0 name="/dev/fd/63" inode=25971 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:30:29.231497 kernel: audit: type=1327 audit(1752579029.208:308): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:30:29.208000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:30:29.238000 audit[3918]: AVC avc: denied { write } for pid=3918 comm="tee" name="fd" 
dev="proc" ino=25978 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:30:29.238000 audit[3918]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeec6537ee a2=241 a3=1b6 items=1 ppid=3888 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.238000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 15 11:30:29.238000 audit: PATH item=0 name="/dev/fd/63" inode=27016 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:30:29.238000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:30:29.242000 audit[3930]: AVC avc: denied { write } for pid=3930 comm="tee" name="fd" dev="proc" ino=25984 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:30:29.242000 audit[3930]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce23c17de a2=241 a3=1b6 items=1 ppid=3881 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.242000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 15 11:30:29.242000 audit: PATH item=0 name="/dev/fd/63" inode=25207 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:30:29.242000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:30:29.245000 audit[3914]: AVC avc: denied { write } for pid=3914 comm="tee" name="fd" dev="proc" ino=25212 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:30:29.245000 audit[3914]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffdd7d57ed a2=241 a3=1b6 items=1 ppid=3870 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.245000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 15 11:30:29.245000 audit: PATH item=0 name="/dev/fd/63" inode=25206 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:30:29.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:30:29.258000 audit[3945]: AVC avc: denied { write } for pid=3945 comm="tee" name="fd" dev="proc" ino=25992 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:30:29.258000 audit[3945]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdd629e7ef a2=241 a3=1b6 items=1 ppid=3872 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.258000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 15 11:30:29.258000 audit: PATH item=0 name="/dev/fd/63" inode=27019 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:30:29.258000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:30:29.265000 audit[3943]: AVC avc: denied { write } for pid=3943 comm="tee" name="fd" dev="proc" ino=24503 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:30:29.265000 audit[3943]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe8e14d7ed a2=241 a3=1b6 items=1 ppid=3866 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.265000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 15 11:30:29.265000 audit: PATH item=0 name="/dev/fd/63" inode=24500 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:30:29.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit: BPF prog-id=10 op=LOAD Jul 15 11:30:29.358000 audit[3961]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff717982e0 a2=98 a3=1fffffffffffffff items=0 ppid=3869 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.358000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:30:29.358000 audit: BPF prog-id=10 op=UNLOAD Jul 15 11:30:29.358000 
audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit: BPF prog-id=11 op=LOAD Jul 15 11:30:29.358000 audit[3961]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff717981c0 a2=94 a3=3 items=0 ppid=3869 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.358000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:30:29.358000 audit: BPF prog-id=11 op=UNLOAD Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { 
perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { bpf } for pid=3961 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit: BPF prog-id=12 op=LOAD Jul 15 11:30:29.358000 audit[3961]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff71798200 a2=94 a3=7fff717983e0 items=0 ppid=3869 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.358000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:30:29.358000 audit: BPF prog-id=12 op=UNLOAD Jul 15 11:30:29.358000 audit[3961]: AVC avc: denied { perfmon } for pid=3961 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.358000 audit[3961]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fff717982d0 a2=50 a3=a000000085 items=0 ppid=3869 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.358000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 
11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit: BPF prog-id=13 op=LOAD Jul 15 11:30:29.363000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffebd3e330 a2=98 a3=3 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.363000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.363000 audit: BPF prog-id=13 op=UNLOAD Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit: BPF prog-id=14 op=LOAD Jul 15 11:30:29.363000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffebd3e120 a2=94 a3=54428f items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.363000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.363000 audit: BPF prog-id=14 op=UNLOAD Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.363000 audit: BPF prog-id=15 op=LOAD Jul 15 11:30:29.363000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffebd3e150 a2=94 a3=2 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.363000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.363000 audit: BPF prog-id=15 op=UNLOAD Jul 15 11:30:29.489000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.489000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.489000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.489000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.489000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.489000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.489000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.489000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.489000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.489000 audit: BPF prog-id=16 op=LOAD Jul 15 11:30:29.489000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffebd3e010 a2=94 a3=1 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jul 15 11:30:29.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.489000 audit: BPF prog-id=16 op=UNLOAD Jul 15 11:30:29.489000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.489000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffebd3e0e0 a2=50 a3=7fffebd3e1c0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffebd3e020 a2=28 a3=0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffebd3e050 a2=28 a3=0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 
11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffebd3df60 a2=28 a3=0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffebd3e070 a2=28 a3=0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffebd3e050 a2=28 a3=0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffebd3e040 a2=28 a3=0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffebd3e070 a2=28 a3=0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffebd3e050 a2=28 a3=0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 
11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffebd3e070 a2=28 a3=0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffebd3e040 a2=28 a3=0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffebd3e0b0 a2=28 a3=0 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffebd3de60 a2=50 a3=1 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit: BPF prog-id=17 op=LOAD Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffebd3de60 a2=94 a3=5 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit: BPF prog-id=17 op=UNLOAD Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffebd3df10 a2=50 a3=1 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffebd3e030 
a2=4 a3=38 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.498000 audit[3962]: AVC avc: denied { confidentiality } for pid=3962 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:30:29.498000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffebd3e080 a2=94 a3=6 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { confidentiality } for pid=3962 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:30:29.499000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffebd3d830 a2=94 a3=88 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.499000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { perfmon } for pid=3962 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 
11:30:29.499000 audit[3962]: AVC avc: denied { bpf } for pid=3962 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.499000 audit[3962]: AVC avc: denied { confidentiality } for pid=3962 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:30:29.499000 audit[3962]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffebd3d830 a2=94 a3=88 items=0 ppid=3869 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.499000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: 
AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit: BPF prog-id=18 op=LOAD Jul 15 11:30:29.506000 audit[3982]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa990dd30 a2=98 a3=1999999999999999 items=0 ppid=3869 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.506000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 15 11:30:29.506000 audit: BPF prog-id=18 op=UNLOAD Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit: BPF prog-id=19 op=LOAD Jul 15 11:30:29.506000 audit[3982]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa990dc10 a2=94 a3=ffff items=0 ppid=3869 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.506000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 15 11:30:29.506000 audit: BPF prog-id=19 op=UNLOAD Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { perfmon } for pid=3982 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit[3982]: AVC avc: denied { bpf } for pid=3982 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.506000 audit: BPF prog-id=20 op=LOAD Jul 15 11:30:29.506000 audit[3982]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa990dc50 a2=94 a3=7fffa990de30 items=0 ppid=3869 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.506000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 15 11:30:29.506000 audit: BPF prog-id=20 op=UNLOAD Jul 15 11:30:29.557491 systemd-networkd[1077]: vxlan.calico: Link UP Jul 15 11:30:29.557500 systemd-networkd[1077]: vxlan.calico: Gained carrier Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit: BPF prog-id=21 op=LOAD Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd359d910 a2=98 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit: BPF prog-id=21 op=UNLOAD Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit: BPF prog-id=22 op=LOAD Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd359d720 a2=94 a3=54428f items=0 ppid=3869 pid=4009 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit: BPF prog-id=22 op=UNLOAD Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit: BPF prog-id=23 op=LOAD Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd359d750 a2=94 a3=2 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit: BPF prog-id=23 op=UNLOAD Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcd359d620 a2=28 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcd359d650 a2=28 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcd359d560 a2=28 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcd359d670 a2=28 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcd359d650 a2=28 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcd359d640 a2=28 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied 
{ bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcd359d670 a2=28 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcd359d650 a2=28 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcd359d670 a2=28 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcd359d640 a2=28 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcd359d6b0 a2=28 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit[4009]: 
AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.574000 audit: BPF prog-id=24 op=LOAD Jul 15 11:30:29.574000 audit[4009]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd359d520 a2=94 a3=0 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.574000 audit: BPF prog-id=24 op=UNLOAD Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffcd359d510 a2=50 a3=2800 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.575000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffcd359d510 a2=50 a3=2800 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.575000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for 
pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit: BPF prog-id=25 op=LOAD Jul 15 11:30:29.575000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd359cd30 a2=94 a3=2 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.575000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.575000 audit: BPF prog-id=25 op=UNLOAD Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { perfmon } for pid=4009 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit[4009]: AVC avc: denied { bpf } for pid=4009 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.575000 audit: BPF prog-id=26 op=LOAD Jul 15 11:30:29.575000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd359ce30 a2=94 a3=30 items=0 ppid=3869 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.575000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit: BPF prog-id=27 op=LOAD Jul 15 11:30:29.582000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe34aaa600 a2=98 a3=0 items=0 ppid=3869 pid=4017 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.582000 audit: BPF prog-id=27 op=UNLOAD Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 
audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit: BPF prog-id=28 op=LOAD Jul 15 11:30:29.582000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe34aaa3f0 a2=94 a3=54428f items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.582000 audit: BPF prog-id=28 op=UNLOAD Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for 
pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.582000 audit: BPF prog-id=29 op=LOAD Jul 15 11:30:29.582000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe34aaa420 a2=94 a3=2 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.582000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.583000 audit: BPF prog-id=29 op=UNLOAD Jul 15 11:30:29.683000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.683000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.683000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.683000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.683000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.683000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.683000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.683000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.683000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.683000 audit: BPF prog-id=30 op=LOAD Jul 15 11:30:29.683000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe34aaa2e0 a2=94 a3=1 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jul 15 11:30:29.683000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.683000 audit: BPF prog-id=30 op=UNLOAD Jul 15 11:30:29.683000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.683000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe34aaa3b0 a2=50 a3=7ffe34aaa490 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.683000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe34aaa2f0 a2=28 a3=0 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe34aaa320 a2=28 a3=0 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe34aaa230 a2=28 a3=0 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe34aaa340 a2=28 a3=0 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe34aaa320 a2=28 a3=0 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe34aaa310 a2=28 a3=0 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe34aaa340 a2=28 a3=0 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe34aaa320 a2=28 a3=0 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe34aaa340 a2=28 a3=0 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe34aaa310 a2=28 a3=0 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe34aaa380 a2=28 a3=0 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe34aaa130 a2=50 a3=1 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit: BPF prog-id=31 op=LOAD Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe34aaa130 a2=94 a3=5 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit: BPF prog-id=31 op=UNLOAD Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe34aaa1e0 a2=50 a3=1 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe34aaa300 a2=4 a3=38 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: 
denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.692000 audit[4017]: AVC avc: denied { confidentiality } for pid=4017 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:30:29.692000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe34aaa350 a2=94 a3=6 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.692000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { confidentiality } for pid=4017 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:30:29.693000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe34aa9b00 a2=94 a3=88 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.693000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { confidentiality } for pid=4017 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:30:29.693000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe34aa9b00 a2=94 a3=88 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.693000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe34aab530 a2=10 a3=208 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.693000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe34aab3d0 a2=10 a3=3 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.693000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe34aab370 a2=10 a3=3 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.693000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.693000 audit[4017]: AVC avc: denied { bpf } for pid=4017 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:30:29.693000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe34aab370 a2=10 a3=7 items=0 ppid=3869 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.693000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:30:29.700000 audit: BPF prog-id=26 op=UNLOAD Jul 15 11:30:29.745000 audit[4042]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=4042 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:30:29.745000 audit[4042]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff0cb23460 a2=0 a3=7fff0cb2344c items=0 ppid=3869 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.745000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:30:29.753000 audit[4043]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=4043 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:30:29.753000 audit[4043]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc7ba82b60 a2=0 a3=7ffc7ba82b4c items=0 ppid=3869 pid=4043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.753000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:30:29.759000 audit[4041]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=4041 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:30:29.759000 audit[4041]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcdecbc790 a2=0 a3=7ffcdecbc77c items=0 ppid=3869 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.759000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:30:29.760000 audit[4046]: NETFILTER_CFG table=filter:104 family=2 entries=170 op=nft_register_chain pid=4046 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:30:29.760000 audit[4046]: SYSCALL arch=c000003e syscall=46 success=yes exit=97952 a0=3 a1=7ffcb4337990 a2=0 a3=7ffcb433797c items=0 ppid=3869 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:29.760000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:30:30.078615 env[1313]: time="2025-07-15T11:30:30.078493679Z" level=info msg="StopPodSandbox for \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\"" Jul 15 11:30:30.079025 env[1313]: time="2025-07-15T11:30:30.078899447Z" level=info msg="StopPodSandbox for 
\"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\"" Jul 15 11:30:30.079025 env[1313]: time="2025-07-15T11:30:30.079011599Z" level=info msg="StopPodSandbox for \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\"" Jul 15 11:30:30.080003 kubelet[2092]: I0715 11:30:30.079967 2092 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a942dc-88ea-4854-a722-52bcabc6d456" path="/var/lib/kubelet/pods/01a942dc-88ea-4854-a722-52bcabc6d456/volumes" Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.128 [INFO][4088] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.128 [INFO][4088] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" iface="eth0" netns="/var/run/netns/cni-aee894de-d28c-fdda-06ba-6eb88839cef7" Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.128 [INFO][4088] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" iface="eth0" netns="/var/run/netns/cni-aee894de-d28c-fdda-06ba-6eb88839cef7" Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.129 [INFO][4088] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" iface="eth0" netns="/var/run/netns/cni-aee894de-d28c-fdda-06ba-6eb88839cef7" Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.129 [INFO][4088] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.129 [INFO][4088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.152 [INFO][4111] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" HandleID="k8s-pod-network.15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.152 [INFO][4111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.152 [INFO][4111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.158 [WARNING][4111] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" HandleID="k8s-pod-network.15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.158 [INFO][4111] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" HandleID="k8s-pod-network.15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.160 [INFO][4111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:30.165668 env[1313]: 2025-07-15 11:30:30.161 [INFO][4088] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:30.168508 env[1313]: time="2025-07-15T11:30:30.165811525Z" level=info msg="TearDown network for sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\" successfully" Jul 15 11:30:30.168508 env[1313]: time="2025-07-15T11:30:30.165839537Z" level=info msg="StopPodSandbox for \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\" returns successfully" Jul 15 11:30:30.168508 env[1313]: time="2025-07-15T11:30:30.166373297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5cfffc-xsvx6,Uid:b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4,Namespace:calico-apiserver,Attempt:1,}" Jul 15 11:30:30.168202 systemd[1]: run-netns-cni\x2daee894de\x2dd28c\x2dfdda\x2d06ba\x2d6eb88839cef7.mount: Deactivated successfully. Jul 15 11:30:30.181879 systemd[1]: run-containerd-runc-k8s.io-024c5805f6ca4db709dc9520f88da646aaf11b11913a43e40f08a301bc93ecce-runc.Ty7tM8.mount: Deactivated successfully. 
Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.132 [INFO][4089] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.132 [INFO][4089] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" iface="eth0" netns="/var/run/netns/cni-fc0244b8-b26a-0e1b-3472-dfaa08a830b7" Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.132 [INFO][4089] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" iface="eth0" netns="/var/run/netns/cni-fc0244b8-b26a-0e1b-3472-dfaa08a830b7" Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.132 [INFO][4089] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" iface="eth0" netns="/var/run/netns/cni-fc0244b8-b26a-0e1b-3472-dfaa08a830b7" Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.132 [INFO][4089] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.132 [INFO][4089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.175 [INFO][4114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" HandleID="k8s-pod-network.92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.175 [INFO][4114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.175 [INFO][4114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.181 [WARNING][4114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" HandleID="k8s-pod-network.92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.181 [INFO][4114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" HandleID="k8s-pod-network.92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.184 [INFO][4114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:30.190193 env[1313]: 2025-07-15 11:30:30.186 [INFO][4089] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:30.190966 env[1313]: time="2025-07-15T11:30:30.190930078Z" level=info msg="TearDown network for sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\" successfully" Jul 15 11:30:30.191078 env[1313]: time="2025-07-15T11:30:30.191055777Z" level=info msg="StopPodSandbox for \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\" returns successfully" Jul 15 11:30:30.191604 kubelet[2092]: E0715 11:30:30.191571 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:30.192343 env[1313]: time="2025-07-15T11:30:30.192305501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pbq8g,Uid:7edd760f-4b3e-4f59-9e90-ee9828b261c3,Namespace:kube-system,Attempt:1,}" Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.129 [INFO][4087] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.130 [INFO][4087] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" iface="eth0" netns="/var/run/netns/cni-ba316028-92cf-daee-ab95-ecf9de46150f" Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.130 [INFO][4087] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" iface="eth0" netns="/var/run/netns/cni-ba316028-92cf-daee-ab95-ecf9de46150f" Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.130 [INFO][4087] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" iface="eth0" netns="/var/run/netns/cni-ba316028-92cf-daee-ab95-ecf9de46150f" Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.130 [INFO][4087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.130 [INFO][4087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.177 [INFO][4109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" HandleID="k8s-pod-network.01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.177 [INFO][4109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.184 [INFO][4109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.190 [WARNING][4109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" HandleID="k8s-pod-network.01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.190 [INFO][4109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" HandleID="k8s-pod-network.01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.191 [INFO][4109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:30.197020 env[1313]: 2025-07-15 11:30:30.195 [INFO][4087] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:30.197630 env[1313]: time="2025-07-15T11:30:30.197597994Z" level=info msg="TearDown network for sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\" successfully" Jul 15 11:30:30.197765 env[1313]: time="2025-07-15T11:30:30.197743209Z" level=info msg="StopPodSandbox for \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\" returns successfully" Jul 15 11:30:30.198534 kubelet[2092]: E0715 11:30:30.198104 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:30.199488 env[1313]: time="2025-07-15T11:30:30.199463955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7tnjd,Uid:5acc111e-02a1-439a-93ba-39e1bce08fb2,Namespace:kube-system,Attempt:1,}" Jul 15 11:30:30.222041 systemd-networkd[1077]: cali67f8c405a2d: Gained IPv6LL Jul 15 11:30:30.348767 systemd-networkd[1077]: cali6f7d0e49b59: Gained IPv6LL Jul 15 11:30:30.409099 systemd[1]: 
run-netns-cni\x2dba316028\x2d92cf\x2ddaee\x2dab95\x2decf9de46150f.mount: Deactivated successfully. Jul 15 11:30:30.409219 systemd[1]: run-netns-cni\x2dfc0244b8\x2db26a\x2d0e1b\x2d3472\x2ddfaa08a830b7.mount: Deactivated successfully. Jul 15 11:30:30.796780 systemd-networkd[1077]: califdeddd48904: Gained IPv6LL Jul 15 11:30:30.833111 systemd-networkd[1077]: calicec8e462bfc: Link UP Jul 15 11:30:30.835773 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:30:30.835890 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicec8e462bfc: link becomes ready Jul 15 11:30:30.836032 systemd-networkd[1077]: calicec8e462bfc: Gained carrier Jul 15 11:30:30.860777 systemd-networkd[1077]: vxlan.calico: Gained IPv6LL Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.242 [INFO][4147] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0 calico-apiserver-77c5cfffc- calico-apiserver b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4 1025 0 2025-07-15 11:29:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c5cfffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c5cfffc-xsvx6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicec8e462bfc [] [] }} ContainerID="d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-xsvx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-" Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.242 [INFO][4147] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-xsvx6" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.274 [INFO][4200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" HandleID="k8s-pod-network.d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.275 [INFO][4200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" HandleID="k8s-pod-network.d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b8b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77c5cfffc-xsvx6", "timestamp":"2025-07-15 11:30:30.274896082 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.275 [INFO][4200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.275 [INFO][4200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.275 [INFO][4200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.280 [INFO][4200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" host="localhost" Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.283 [INFO][4200] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.517 [INFO][4200] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.742 [INFO][4200] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.745 [INFO][4200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.745 [INFO][4200] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" host="localhost" Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.747 [INFO][4200] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.760 [INFO][4200] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" host="localhost" Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.828 [INFO][4200] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" host="localhost" Jul 15 
11:30:30.928484 env[1313]: 2025-07-15 11:30:30.828 [INFO][4200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" host="localhost"
Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.828 [INFO][4200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 11:30:30.928484 env[1313]: 2025-07-15 11:30:30.828 [INFO][4200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" HandleID="k8s-pod-network.d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0"
Jul 15 11:30:30.929137 env[1313]: 2025-07-15 11:30:30.829 [INFO][4147] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-xsvx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0", GenerateName:"calico-apiserver-77c5cfffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5cfffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c5cfffc-xsvx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicec8e462bfc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 11:30:30.929137 env[1313]: 2025-07-15 11:30:30.830 [INFO][4147] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-xsvx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0"
Jul 15 11:30:30.929137 env[1313]: 2025-07-15 11:30:30.830 [INFO][4147] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicec8e462bfc ContainerID="d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-xsvx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0"
Jul 15 11:30:30.929137 env[1313]: 2025-07-15 11:30:30.840 [INFO][4147] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-xsvx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0"
Jul 15 11:30:30.929137 env[1313]: 2025-07-15 11:30:30.840 [INFO][4147] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-xsvx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0", GenerateName:"calico-apiserver-77c5cfffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5cfffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e", Pod:"calico-apiserver-77c5cfffc-xsvx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicec8e462bfc", MAC:"4e:2e:fe:75:c0:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 11:30:30.929137 env[1313]: 2025-07-15 11:30:30.926 [INFO][4147] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e" Namespace="calico-apiserver" Pod="calico-apiserver-77c5cfffc-xsvx6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0"
Jul 15 11:30:30.944935 env[1313]: time="2025-07-15T11:30:30.944867522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:30:30.945117 env[1313]: time="2025-07-15T11:30:30.945092648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:30:30.945221 env[1313]: time="2025-07-15T11:30:30.945198438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:30:30.945458 env[1313]: time="2025-07-15T11:30:30.945433793Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e pid=4250 runtime=io.containerd.runc.v2
Jul 15 11:30:30.950000 audit[4244]: NETFILTER_CFG table=filter:105 family=2 entries=41 op=nft_register_chain pid=4244 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul 15 11:30:30.950000 audit[4244]: SYSCALL arch=c000003e syscall=46 success=yes exit=23060 a0=3 a1=7ffc8095f0e0 a2=0 a3=7ffc8095f0cc items=0 ppid=3869 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:30.950000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul 15 11:30:30.953670 systemd-networkd[1077]: cali706eb2e81dd: Link UP
Jul 15 11:30:30.958506 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali706eb2e81dd: link becomes ready
Jul 15 11:30:30.961326 systemd-networkd[1077]: cali706eb2e81dd: Gained carrier
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.262 [INFO][4175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0 coredns-7c65d6cfc9- kube-system 7edd760f-4b3e-4f59-9e90-ee9828b261c3 1027 0 2025-07-15 11:29:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-pbq8g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali706eb2e81dd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pbq8g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pbq8g-"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.262 [INFO][4175] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pbq8g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.291 [INFO][4212] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" HandleID="k8s-pod-network.a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.291 [INFO][4212] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" HandleID="k8s-pod-network.a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-pbq8g", "timestamp":"2025-07-15 11:30:30.291265623 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.291 [INFO][4212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.828 [INFO][4212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.828 [INFO][4212] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.835 [INFO][4212] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" host="localhost"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.839 [INFO][4212] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.929 [INFO][4212] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.931 [INFO][4212] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.934 [INFO][4212] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.934 [INFO][4212] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" host="localhost"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.935 [INFO][4212] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.940 [INFO][4212] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" host="localhost"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.948 [INFO][4212] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" host="localhost"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.948 [INFO][4212] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" host="localhost"
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.948 [INFO][4212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 11:30:30.979768 env[1313]: 2025-07-15 11:30:30.948 [INFO][4212] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" HandleID="k8s-pod-network.a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0"
Jul 15 11:30:30.980347 env[1313]: 2025-07-15 11:30:30.950 [INFO][4175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pbq8g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7edd760f-4b3e-4f59-9e90-ee9828b261c3", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-pbq8g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali706eb2e81dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 11:30:30.980347 env[1313]: 2025-07-15 11:30:30.950 [INFO][4175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pbq8g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0"
Jul 15 11:30:30.980347 env[1313]: 2025-07-15 11:30:30.950 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali706eb2e81dd ContainerID="a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pbq8g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0"
Jul 15 11:30:30.980347 env[1313]: 2025-07-15 11:30:30.963 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pbq8g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0"
Jul 15 11:30:30.980347 env[1313]: 2025-07-15 11:30:30.964 [INFO][4175] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pbq8g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7edd760f-4b3e-4f59-9e90-ee9828b261c3", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c", Pod:"coredns-7c65d6cfc9-pbq8g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali706eb2e81dd", MAC:"ae:5d:05:16:cb:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 11:30:30.980347 env[1313]: 2025-07-15 11:30:30.977 [INFO][4175] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pbq8g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0"
Jul 15 11:30:30.986231 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 11:30:30.990666 env[1313]: time="2025-07-15T11:30:30.990587404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:30:30.990731 env[1313]: time="2025-07-15T11:30:30.990680519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:30:30.990731 env[1313]: time="2025-07-15T11:30:30.990704505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:30:30.990929 env[1313]: time="2025-07-15T11:30:30.990891860Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c pid=4293 runtime=io.containerd.runc.v2
Jul 15 11:30:31.006000 audit[4310]: NETFILTER_CFG table=filter:106 family=2 entries=50 op=nft_register_chain pid=4310 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul 15 11:30:31.006000 audit[4310]: SYSCALL arch=c000003e syscall=46 success=yes exit=24912 a0=3 a1=7ffecacb4640 a2=0 a3=7ffecacb462c items=0 ppid=3869 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:31.006000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul 15 11:30:31.023008 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 11:30:31.029176 env[1313]: time="2025-07-15T11:30:31.029140858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5cfffc-xsvx6,Uid:b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e\""
Jul 15 11:30:31.058677 systemd-networkd[1077]: calida3fda89946: Link UP
Jul 15 11:30:31.061664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calida3fda89946: link becomes ready
Jul 15 11:30:31.061891 systemd-networkd[1077]: calida3fda89946: Gained carrier
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:30.264 [INFO][4178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0 coredns-7c65d6cfc9- kube-system 5acc111e-02a1-439a-93ba-39e1bce08fb2 1026 0 2025-07-15 11:29:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-7tnjd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida3fda89946 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tnjd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tnjd-"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:30.264 [INFO][4178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tnjd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:30.292 [INFO][4211] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" HandleID="k8s-pod-network.ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:30.292 [INFO][4211] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" HandleID="k8s-pod-network.ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a57a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-7tnjd", "timestamp":"2025-07-15 11:30:30.292063271 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:30.292 [INFO][4211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:30.948 [INFO][4211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:30.948 [INFO][4211] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:30.966 [INFO][4211] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" host="localhost"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:30.970 [INFO][4211] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:31.030 [INFO][4211] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:31.034 [INFO][4211] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:31.036 [INFO][4211] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:31.036 [INFO][4211] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" host="localhost"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:31.038 [INFO][4211] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:31.045 [INFO][4211] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" host="localhost"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:31.051 [INFO][4211] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" host="localhost"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:31.051 [INFO][4211] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" host="localhost"
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:31.051 [INFO][4211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 11:30:31.076614 env[1313]: 2025-07-15 11:30:31.051 [INFO][4211] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" HandleID="k8s-pod-network.ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0"
Jul 15 11:30:31.077210 env[1313]: 2025-07-15 11:30:31.053 [INFO][4178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tnjd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5acc111e-02a1-439a-93ba-39e1bce08fb2", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-7tnjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida3fda89946", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 11:30:31.077210 env[1313]: 2025-07-15 11:30:31.053 [INFO][4178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tnjd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0"
Jul 15 11:30:31.077210 env[1313]: 2025-07-15 11:30:31.053 [INFO][4178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida3fda89946 ContainerID="ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tnjd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0"
Jul 15 11:30:31.077210 env[1313]: 2025-07-15 11:30:31.062 [INFO][4178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tnjd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0"
Jul 15 11:30:31.077210 env[1313]: 2025-07-15 11:30:31.062 [INFO][4178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tnjd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5acc111e-02a1-439a-93ba-39e1bce08fb2", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c", Pod:"coredns-7c65d6cfc9-7tnjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida3fda89946", MAC:"0e:bb:04:00:37:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 11:30:31.077210 env[1313]: 2025-07-15 11:30:31.071 [INFO][4178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tnjd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0"
Jul 15 11:30:31.077210 env[1313]: time="2025-07-15T11:30:31.076954367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pbq8g,Uid:7edd760f-4b3e-4f59-9e90-ee9828b261c3,Namespace:kube-system,Attempt:1,} returns sandbox id \"a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c\""
Jul 15 11:30:31.078004 kubelet[2092]: E0715 11:30:31.077581 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:31.079580 env[1313]: time="2025-07-15T11:30:31.079554505Z" level=info msg="CreateContainer within sandbox \"a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 11:30:31.086000 audit[4343]: NETFILTER_CFG table=filter:107 family=2 entries=44 op=nft_register_chain pid=4343 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul 15 11:30:31.086000 audit[4343]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7ffe69ca0880 a2=0 a3=7ffe69ca086c items=0 ppid=3869 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:31.086000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul 15 11:30:31.102587 env[1313]: time="2025-07-15T11:30:31.102515781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:30:31.102763 env[1313]: time="2025-07-15T11:30:31.102592796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:30:31.102763 env[1313]: time="2025-07-15T11:30:31.102615830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:30:31.102829 env[1313]: time="2025-07-15T11:30:31.102778388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c pid=4353 runtime=io.containerd.runc.v2
Jul 15 11:30:31.123000 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 11:30:31.140837 env[1313]: time="2025-07-15T11:30:31.140769475Z" level=info msg="CreateContainer within sandbox \"a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7830a475a8ef5edf7ba7b125880dcde9115c93c0ef019094bd0e033a8f4b971d\""
Jul 15 11:30:31.141406 env[1313]: time="2025-07-15T11:30:31.141365042Z" level=info msg="StartContainer for \"7830a475a8ef5edf7ba7b125880dcde9115c93c0ef019094bd0e033a8f4b971d\""
Jul 15 11:30:31.148259 env[1313]: time="2025-07-15T11:30:31.148238364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7tnjd,Uid:5acc111e-02a1-439a-93ba-39e1bce08fb2,Namespace:kube-system,Attempt:1,} returns sandbox id \"ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c\""
Jul 15 11:30:31.148913 kubelet[2092]: E0715 11:30:31.148889 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:31.150559 env[1313]: time="2025-07-15T11:30:31.150539919Z" level=info msg="CreateContainer within sandbox \"ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 11:30:31.167789 env[1313]: time="2025-07-15T11:30:31.167752850Z" level=info msg="CreateContainer within sandbox \"ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4af7d662924cf2f9e0a0b3fb7721d2aa1b60bc0a6871387a6e69ce3a695f3237\""
Jul 15 11:30:31.168137 env[1313]: time="2025-07-15T11:30:31.168114865Z" level=info msg="StartContainer for \"4af7d662924cf2f9e0a0b3fb7721d2aa1b60bc0a6871387a6e69ce3a695f3237\""
Jul 15 11:30:31.188572 env[1313]: time="2025-07-15T11:30:31.188529363Z" level=info msg="StartContainer for \"7830a475a8ef5edf7ba7b125880dcde9115c93c0ef019094bd0e033a8f4b971d\" returns successfully"
Jul 15 11:30:31.218186 env[1313]: time="2025-07-15T11:30:31.218151279Z" level=info msg="StartContainer for \"4af7d662924cf2f9e0a0b3fb7721d2aa1b60bc0a6871387a6e69ce3a695f3237\" returns successfully"
Jul 15 11:30:31.352684 env[1313]: time="2025-07-15T11:30:31.352545692Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:31.354959 env[1313]: time="2025-07-15T11:30:31.354922728Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:31.356250 env[1313]: time="2025-07-15T11:30:31.356215213Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:31.357711 env[1313]: time="2025-07-15T11:30:31.357685505Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:31.358138 env[1313]: time="2025-07-15T11:30:31.358104498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Jul 15 11:30:31.359112 env[1313]: time="2025-07-15T11:30:31.359080574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 15 11:30:31.360630 env[1313]: time="2025-07-15T11:30:31.360599467Z" level=info msg="CreateContainer within sandbox \"cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 15 11:30:31.378886 env[1313]: time="2025-07-15T11:30:31.378832448Z" level=info msg="CreateContainer within sandbox \"cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d7e56c3e01e7852e1636e041d0789cde1a5d9145b78d83def3e787cf5aa8cc9e\""
Jul 15 11:30:31.379085 systemd[1]: Started sshd@10-10.0.0.41:22-10.0.0.1:41688.service.
Jul 15 11:30:31.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.41:22-10.0.0.1:41688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 15 11:30:31.380447 env[1313]: time="2025-07-15T11:30:31.380417827Z" level=info msg="StartContainer for \"d7e56c3e01e7852e1636e041d0789cde1a5d9145b78d83def3e787cf5aa8cc9e\"" Jul 15 11:30:31.421000 audit[4466]: USER_ACCT pid=4466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:31.422266 sshd[4466]: Accepted publickey for core from 10.0.0.1 port 41688 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:31.422000 audit[4466]: CRED_ACQ pid=4466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:31.422000 audit[4466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1194ca90 a2=3 a3=0 items=0 ppid=1 pid=4466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:31.422000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:31.423828 sshd[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:31.429267 systemd[1]: Started session-11.scope. Jul 15 11:30:31.429749 systemd-logind[1289]: New session 11 of user core. 
Jul 15 11:30:31.432831 env[1313]: time="2025-07-15T11:30:31.432800169Z" level=info msg="StartContainer for \"d7e56c3e01e7852e1636e041d0789cde1a5d9145b78d83def3e787cf5aa8cc9e\" returns successfully" Jul 15 11:30:31.436000 audit[4466]: USER_START pid=4466 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:31.437000 audit[4503]: CRED_ACQ pid=4503 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:31.547877 sshd[4466]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:31.548000 audit[4466]: USER_END pid=4466 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:31.548000 audit[4466]: CRED_DISP pid=4466 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:31.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.41:22-10.0.0.1:41692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:31.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.41:22-10.0.0.1:41688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:30:31.550462 systemd[1]: Started sshd@11-10.0.0.41:22-10.0.0.1:41692.service. Jul 15 11:30:31.550921 systemd[1]: sshd@10-10.0.0.41:22-10.0.0.1:41688.service: Deactivated successfully. Jul 15 11:30:31.551862 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 11:30:31.552426 systemd-logind[1289]: Session 11 logged out. Waiting for processes to exit. Jul 15 11:30:31.553366 systemd-logind[1289]: Removed session 11. Jul 15 11:30:31.587000 audit[4514]: USER_ACCT pid=4514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:31.588044 sshd[4514]: Accepted publickey for core from 10.0.0.1 port 41692 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:31.588000 audit[4514]: CRED_ACQ pid=4514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:31.588000 audit[4514]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedbbf18b0 a2=3 a3=0 items=0 ppid=1 pid=4514 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:31.588000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:31.589135 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:31.592474 systemd-logind[1289]: New session 12 of user core. Jul 15 11:30:31.593184 systemd[1]: Started session-12.scope. 
Jul 15 11:30:31.596000 audit[4514]: USER_START pid=4514 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:31.597000 audit[4518]: CRED_ACQ pid=4518 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:31.814467 sshd[4514]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:31.815000 audit[4514]: USER_END pid=4514 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:31.815000 audit[4514]: CRED_DISP pid=4514 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:31.817102 systemd[1]: Started sshd@12-10.0.0.41:22-10.0.0.1:41696.service.
Jul 15 11:30:31.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.41:22-10.0.0.1:41696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:30:31.818147 systemd[1]: sshd@11-10.0.0.41:22-10.0.0.1:41692.service: Deactivated successfully.
Jul 15 11:30:31.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.41:22-10.0.0.1:41692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:30:31.818966 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 11:30:31.819079 systemd-logind[1289]: Session 12 logged out. Waiting for processes to exit.
Jul 15 11:30:31.820115 systemd-logind[1289]: Removed session 12.
Jul 15 11:30:31.852000 audit[4525]: USER_ACCT pid=4525 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:31.853859 sshd[4525]: Accepted publickey for core from 10.0.0.1 port 41696 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:30:31.853000 audit[4525]: CRED_ACQ pid=4525 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:31.853000 audit[4525]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8937da40 a2=3 a3=0 items=0 ppid=1 pid=4525 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:31.853000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 15 11:30:31.854797 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:31.858127 systemd-logind[1289]: New session 13 of user core.
Jul 15 11:30:31.858839 systemd[1]: Started session-13.scope.
Jul 15 11:30:31.863000 audit[4525]: USER_START pid=4525 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:31.864000 audit[4530]: CRED_ACQ pid=4530 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:31.948824 systemd-networkd[1077]: calicec8e462bfc: Gained IPv6LL
Jul 15 11:30:32.174387 kubelet[2092]: E0715 11:30:32.174350 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:32.246421 kubelet[2092]: E0715 11:30:32.177681 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:32.434859 sshd[4525]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:32.435000 audit[4525]: USER_END pid=4525 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:32.435000 audit[4525]: CRED_DISP pid=4525 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:32.437253 systemd[1]: sshd@12-10.0.0.41:22-10.0.0.1:41696.service: Deactivated successfully.
Jul 15 11:30:32.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.41:22-10.0.0.1:41696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:30:32.438500 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 11:30:32.439062 systemd-logind[1289]: Session 13 logged out. Waiting for processes to exit.
Jul 15 11:30:32.440094 systemd-logind[1289]: Removed session 13.
Jul 15 11:30:32.593206 kubelet[2092]: I0715 11:30:32.593139 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pbq8g" podStartSLOduration=49.59311579 podStartE2EDuration="49.59311579s" podCreationTimestamp="2025-07-15 11:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:30:32.578458878 +0000 UTC m=+54.606458122" watchObservedRunningTime="2025-07-15 11:30:32.59311579 +0000 UTC m=+54.621115024"
Jul 15 11:30:32.605000 audit[4542]: NETFILTER_CFG table=filter:108 family=2 entries=20 op=nft_register_rule pid=4542 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:30:32.605000 audit[4542]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb804dff0 a2=0 a3=7ffcb804dfdc items=0 ppid=2223 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:32.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:30:32.615000 audit[4542]: NETFILTER_CFG table=nat:109 family=2 entries=14 op=nft_register_rule pid=4542 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:30:32.615000 audit[4542]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcb804dff0 a2=0 a3=0 items=0 ppid=2223 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:32.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:30:32.628000 audit[4544]: NETFILTER_CFG table=filter:110 family=2 entries=17 op=nft_register_rule pid=4544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:30:32.628000 audit[4544]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc96b645f0 a2=0 a3=7ffc96b645dc items=0 ppid=2223 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:32.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:30:32.637000 audit[4544]: NETFILTER_CFG table=nat:111 family=2 entries=47 op=nft_register_chain pid=4544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:30:32.637000 audit[4544]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc96b645f0 a2=0 a3=7ffc96b645dc items=0 ppid=2223 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:32.637000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:30:32.716808 systemd-networkd[1077]: cali706eb2e81dd: Gained IPv6LL
Jul 15 11:30:32.717149 systemd-networkd[1077]: calida3fda89946: Gained IPv6LL
Jul 15 11:30:33.179988 kubelet[2092]: E0715 11:30:33.179959 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:33.180490 kubelet[2092]: E0715 11:30:33.180019 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:34.181000 kubelet[2092]: E0715 11:30:34.180977 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:34.181494 kubelet[2092]: E0715 11:30:34.181051 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:34.955517 env[1313]: time="2025-07-15T11:30:34.955467059Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:34.957291 env[1313]: time="2025-07-15T11:30:34.957258135Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:34.958895 env[1313]: time="2025-07-15T11:30:34.958860706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:34.960226 env[1313]: time="2025-07-15T11:30:34.960195249Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:34.960765 env[1313]: time="2025-07-15T11:30:34.960734799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 15 11:30:34.962241 env[1313]: time="2025-07-15T11:30:34.962189690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Jul 15 11:30:34.962782 env[1313]: time="2025-07-15T11:30:34.962739390Z" level=info msg="CreateContainer within sandbox \"99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 15 11:30:34.974786 env[1313]: time="2025-07-15T11:30:34.974725695Z" level=info msg="CreateContainer within sandbox \"99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5d5c0b64a9e526886e390b788087bdee6125f9d84bf042e5e8131e9b3254a7fb\""
Jul 15 11:30:34.975619 env[1313]: time="2025-07-15T11:30:34.975579619Z" level=info msg="StartContainer for \"5d5c0b64a9e526886e390b788087bdee6125f9d84bf042e5e8131e9b3254a7fb\""
Jul 15 11:30:35.031146 env[1313]: time="2025-07-15T11:30:35.031077484Z" level=info msg="StartContainer for \"5d5c0b64a9e526886e390b788087bdee6125f9d84bf042e5e8131e9b3254a7fb\" returns successfully"
Jul 15 11:30:35.193339 kubelet[2092]: I0715 11:30:35.193274 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77c5cfffc-tnzhf" podStartSLOduration=35.051332694 podStartE2EDuration="41.193260181s" podCreationTimestamp="2025-07-15 11:29:54 +0000 UTC" firstStartedPulling="2025-07-15 11:30:28.819745917 +0000 UTC m=+50.847745151" lastFinishedPulling="2025-07-15 11:30:34.961673394 +0000 UTC m=+56.989672638" observedRunningTime="2025-07-15 11:30:35.192940276 +0000 UTC m=+57.220939510" watchObservedRunningTime="2025-07-15 11:30:35.193260181 +0000 UTC
m=+57.221259415"
Jul 15 11:30:35.193900 kubelet[2092]: I0715 11:30:35.193421 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7tnjd" podStartSLOduration=52.19341807 podStartE2EDuration="52.19341807s" podCreationTimestamp="2025-07-15 11:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:30:32.609679208 +0000 UTC m=+54.637678442" watchObservedRunningTime="2025-07-15 11:30:35.19341807 +0000 UTC m=+57.221417304"
Jul 15 11:30:35.204000 audit[4596]: NETFILTER_CFG table=filter:112 family=2 entries=14 op=nft_register_rule pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:30:35.206100 kernel: kauditd_printk_skb: 601 callbacks suppressed
Jul 15 11:30:35.206146 kernel: audit: type=1325 audit(1752579035.204:450): table=filter:112 family=2 entries=14 op=nft_register_rule pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:30:35.204000 audit[4596]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc79f36420 a2=0 a3=7ffc79f3640c items=0 ppid=2223 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:35.213099 kernel: audit: type=1300 audit(1752579035.204:450): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc79f36420 a2=0 a3=7ffc79f3640c items=0 ppid=2223 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:35.213147 kernel: audit: type=1327 audit(1752579035.204:450): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:30:35.204000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:30:35.218000 audit[4596]: NETFILTER_CFG table=nat:113 family=2 entries=20 op=nft_register_rule pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:30:35.218000 audit[4596]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc79f36420 a2=0 a3=7ffc79f3640c items=0 ppid=2223 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:35.226083 kernel: audit: type=1325 audit(1752579035.218:451): table=nat:113 family=2 entries=20 op=nft_register_rule pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:30:35.226163 kernel: audit: type=1300 audit(1752579035.218:451): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc79f36420 a2=0 a3=7ffc79f3640c items=0 ppid=2223 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:35.226198 kernel: audit: type=1327 audit(1752579035.218:451): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:30:35.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:30:36.185998 kubelet[2092]: I0715 11:30:36.185954 2092 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 11:30:37.270131 env[1313]: time="2025-07-15T11:30:37.270077671Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:37.272055 env[1313]: time="2025-07-15T11:30:37.272033819Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:37.273701 env[1313]: time="2025-07-15T11:30:37.273677005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:37.275474 env[1313]: time="2025-07-15T11:30:37.275447512Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:30:37.276021 env[1313]: time="2025-07-15T11:30:37.275999225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\""
Jul 15 11:30:37.277009 env[1313]: time="2025-07-15T11:30:37.276973697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 15 11:30:37.277941 env[1313]: time="2025-07-15T11:30:37.277914205Z" level=info msg="CreateContainer within sandbox \"57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 15 11:30:37.291794 env[1313]: time="2025-07-15T11:30:37.291748135Z" level=info msg="CreateContainer within sandbox \"57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ba58b7b443cd57a9e056376de237f3cc4f7d49250612d8c2f759953387d50e42\""
Jul 15 11:30:37.292322 env[1313]: time="2025-07-15T11:30:37.292267847Z" level=info msg="StartContainer for \"ba58b7b443cd57a9e056376de237f3cc4f7d49250612d8c2f759953387d50e42\""
Jul 15 11:30:37.353579 env[1313]: time="2025-07-15T11:30:37.353527945Z" level=info msg="StartContainer for \"ba58b7b443cd57a9e056376de237f3cc4f7d49250612d8c2f759953387d50e42\" returns successfully"
Jul 15 11:30:37.437928 systemd[1]: Started sshd@13-10.0.0.41:22-10.0.0.1:41700.service.
Jul 15 11:30:37.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.41:22-10.0.0.1:41700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:30:37.443691 kernel: audit: type=1130 audit(1752579037.437:452): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.41:22-10.0.0.1:41700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:30:37.475000 audit[4637]: USER_ACCT pid=4637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:37.476588 sshd[4637]: Accepted publickey for core from 10.0.0.1 port 41700 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:30:37.479000 audit[4637]: CRED_ACQ pid=4637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:37.480971 sshd[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:37.484577 kernel: audit: type=1101 audit(1752579037.475:453): pid=4637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:37.484682 kernel: audit: type=1103 audit(1752579037.479:454): pid=4637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:37.484729 kernel: audit: type=1006 audit(1752579037.479:455): pid=4637 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
Jul 15 11:30:37.484888 systemd-logind[1289]: New session 14 of user core.
Jul 15 11:30:37.485573 systemd[1]: Started session-14.scope.
Jul 15 11:30:37.479000 audit[4637]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeace9fd10 a2=3 a3=0 items=0 ppid=1 pid=4637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:30:37.479000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 15 11:30:37.489000 audit[4637]: USER_START pid=4637 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:37.490000 audit[4640]: CRED_ACQ pid=4640 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:37.616152 sshd[4637]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:37.616000 audit[4637]: USER_END pid=4637 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:37.616000 audit[4637]: CRED_DISP pid=4637 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:30:37.618436 systemd[1]: sshd@13-10.0.0.41:22-10.0.0.1:41700.service: Deactivated successfully.
Jul 15 11:30:37.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.41:22-10.0.0.1:41700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:30:37.619282 systemd-logind[1289]: Session 14 logged out. Waiting for processes to exit.
Jul 15 11:30:37.619345 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 11:30:37.620055 systemd-logind[1289]: Removed session 14.
Jul 15 11:30:38.047015 env[1313]: time="2025-07-15T11:30:38.046976234Z" level=info msg="StopPodSandbox for \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\""
Jul 15 11:30:38.079507 env[1313]: time="2025-07-15T11:30:38.079460291Z" level=info msg="StopPodSandbox for \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\""
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.074 [WARNING][4661] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" WorkloadEndpoint="localhost-k8s-whisker--55867d57ff--4mhmr-eth0"
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.074 [INFO][4661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387"
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.074 [INFO][4661] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" iface="eth0" netns=""
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.074 [INFO][4661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387"
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.074 [INFO][4661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387"
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.093 [INFO][4669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" HandleID="k8s-pod-network.9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Workload="localhost-k8s-whisker--55867d57ff--4mhmr-eth0"
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.093 [INFO][4669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.093 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.191 [WARNING][4669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" HandleID="k8s-pod-network.9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Workload="localhost-k8s-whisker--55867d57ff--4mhmr-eth0"
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.191 [INFO][4669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" HandleID="k8s-pod-network.9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Workload="localhost-k8s-whisker--55867d57ff--4mhmr-eth0"
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.237 [INFO][4669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 11:30:38.241443 env[1313]: 2025-07-15 11:30:38.239 [INFO][4661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387"
Jul 15 11:30:38.241928 env[1313]: time="2025-07-15T11:30:38.241465385Z" level=info msg="TearDown network for sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\" successfully"
Jul 15 11:30:38.241928 env[1313]: time="2025-07-15T11:30:38.241495422Z" level=info msg="StopPodSandbox for \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\" returns successfully"
Jul 15 11:30:38.242632 env[1313]: time="2025-07-15T11:30:38.242601042Z" level=info msg="RemovePodSandbox for \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\""
Jul 15 11:30:38.242811 env[1313]: time="2025-07-15T11:30:38.242770132Z" level=info msg="Forcibly stopping sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\""
Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.453 [INFO][4688] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba"
Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.453 [INFO][4688] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" iface="eth0" netns="/var/run/netns/cni-790fe826-617e-ad50-57d1-6bfeec728aef"
Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.453 [INFO][4688] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" iface="eth0" netns="/var/run/netns/cni-790fe826-617e-ad50-57d1-6bfeec728aef"
Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.454 [INFO][4688] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" iface="eth0" netns="/var/run/netns/cni-790fe826-617e-ad50-57d1-6bfeec728aef" Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.454 [INFO][4688] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.454 [INFO][4688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.481 [INFO][4717] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" HandleID="k8s-pod-network.30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" Workload="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.481 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.481 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.485 [WARNING][4717] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" HandleID="k8s-pod-network.30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" Workload="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.485 [INFO][4717] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" HandleID="k8s-pod-network.30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" Workload="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.486 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:38.489762 env[1313]: 2025-07-15 11:30:38.488 [INFO][4688] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba" Jul 15 11:30:38.490512 env[1313]: time="2025-07-15T11:30:38.490042527Z" level=info msg="TearDown network for sandbox \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\" successfully" Jul 15 11:30:38.490512 env[1313]: time="2025-07-15T11:30:38.490072575Z" level=info msg="StopPodSandbox for \"30cbcabc5402cdf10e288f21f4abad5af8539cff8765cd9545227bcfdf97d6ba\" returns successfully" Jul 15 11:30:38.492866 systemd[1]: run-netns-cni\x2d790fe826\x2d617e\x2dad50\x2d57d1\x2d6bfeec728aef.mount: Deactivated successfully. 
Jul 15 11:30:38.494809 env[1313]: time="2025-07-15T11:30:38.494769592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c7897fc4-w24xn,Uid:0745c5fc-ce0f-47aa-8707-bacfa72cacb9,Namespace:calico-system,Attempt:1,}" Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.468 [WARNING][4710] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" WorkloadEndpoint="localhost-k8s-whisker--55867d57ff--4mhmr-eth0" Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.468 [INFO][4710] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.468 [INFO][4710] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" iface="eth0" netns="" Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.468 [INFO][4710] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.468 [INFO][4710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.494 [INFO][4726] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" HandleID="k8s-pod-network.9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Workload="localhost-k8s-whisker--55867d57ff--4mhmr-eth0" Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.494 [INFO][4726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.494 [INFO][4726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.501 [WARNING][4726] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" HandleID="k8s-pod-network.9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Workload="localhost-k8s-whisker--55867d57ff--4mhmr-eth0" Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.501 [INFO][4726] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" HandleID="k8s-pod-network.9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Workload="localhost-k8s-whisker--55867d57ff--4mhmr-eth0" Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.502 [INFO][4726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:38.506151 env[1313]: 2025-07-15 11:30:38.504 [INFO][4710] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387" Jul 15 11:30:38.506661 env[1313]: time="2025-07-15T11:30:38.506161564Z" level=info msg="TearDown network for sandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\" successfully" Jul 15 11:30:39.078898 env[1313]: time="2025-07-15T11:30:39.078676082Z" level=info msg="StopPodSandbox for \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\"" Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.198 [INFO][4745] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.199 [INFO][4745] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" iface="eth0" netns="/var/run/netns/cni-cee96ba1-43b8-adfb-c00e-9177a379056f" Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.199 [INFO][4745] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" iface="eth0" netns="/var/run/netns/cni-cee96ba1-43b8-adfb-c00e-9177a379056f" Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.199 [INFO][4745] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" iface="eth0" netns="/var/run/netns/cni-cee96ba1-43b8-adfb-c00e-9177a379056f" Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.199 [INFO][4745] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.199 [INFO][4745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.217 [INFO][4754] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" HandleID="k8s-pod-network.1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" Workload="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.217 [INFO][4754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.217 [INFO][4754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.222 [WARNING][4754] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" HandleID="k8s-pod-network.1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" Workload="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.222 [INFO][4754] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" HandleID="k8s-pod-network.1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" Workload="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.224 [INFO][4754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:39.228124 env[1313]: 2025-07-15 11:30:39.226 [INFO][4745] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3" Jul 15 11:30:39.228775 env[1313]: time="2025-07-15T11:30:39.228410366Z" level=info msg="TearDown network for sandbox \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\" successfully" Jul 15 11:30:39.228775 env[1313]: time="2025-07-15T11:30:39.228447577Z" level=info msg="StopPodSandbox for \"1f017dd727ab6ee52e78031d2951eb92c8ea2f577e8efc8dd26f15d7ba0ba6b3\" returns successfully" Jul 15 11:30:39.229236 env[1313]: time="2025-07-15T11:30:39.229183057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-phmvm,Uid:9b356024-f0d5-45bf-a4bc-f2e9fe1afa45,Namespace:calico-system,Attempt:1,}" Jul 15 11:30:39.233468 systemd[1]: run-netns-cni\x2dcee96ba1\x2d43b8\x2dadfb\x2dc00e\x2d9177a379056f.mount: Deactivated successfully. 
Jul 15 11:30:39.311924 env[1313]: time="2025-07-15T11:30:39.311824301Z" level=info msg="RemovePodSandbox \"9bf919a33538c0d12b09b211fc1cb073f35ff027799020f1b9dd2a446d376387\" returns successfully" Jul 15 11:30:39.312531 env[1313]: time="2025-07-15T11:30:39.312476233Z" level=info msg="StopPodSandbox for \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\"" Jul 15 11:30:39.351252 env[1313]: time="2025-07-15T11:30:39.351127740Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:39.359581 env[1313]: time="2025-07-15T11:30:39.359519641Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:39.361176 env[1313]: time="2025-07-15T11:30:39.361135134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:39.362996 env[1313]: time="2025-07-15T11:30:39.362964812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:39.363185 env[1313]: time="2025-07-15T11:30:39.363153889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 11:30:39.379859 env[1313]: time="2025-07-15T11:30:39.379803215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 11:30:39.381689 env[1313]: time="2025-07-15T11:30:39.381626280Z" level=info msg="CreateContainer within sandbox 
\"d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 11:30:39.398065 env[1313]: 2025-07-15 11:30:39.349 [WARNING][4771] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--96lqs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ea4b49fb-f94a-4309-9631-1c291cb3db4b", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5", Pod:"csi-node-driver-96lqs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f7d0e49b59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:39.398065 env[1313]: 2025-07-15 
11:30:39.349 [INFO][4771] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:39.398065 env[1313]: 2025-07-15 11:30:39.349 [INFO][4771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" iface="eth0" netns="" Jul 15 11:30:39.398065 env[1313]: 2025-07-15 11:30:39.349 [INFO][4771] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:39.398065 env[1313]: 2025-07-15 11:30:39.349 [INFO][4771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:39.398065 env[1313]: 2025-07-15 11:30:39.385 [INFO][4780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" HandleID="k8s-pod-network.6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:39.398065 env[1313]: 2025-07-15 11:30:39.385 [INFO][4780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:39.398065 env[1313]: 2025-07-15 11:30:39.385 [INFO][4780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:39.398065 env[1313]: 2025-07-15 11:30:39.391 [WARNING][4780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" HandleID="k8s-pod-network.6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:39.398065 env[1313]: 2025-07-15 11:30:39.391 [INFO][4780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" HandleID="k8s-pod-network.6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:39.398065 env[1313]: 2025-07-15 11:30:39.394 [INFO][4780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:39.398065 env[1313]: 2025-07-15 11:30:39.396 [INFO][4771] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:39.398590 env[1313]: time="2025-07-15T11:30:39.398091388Z" level=info msg="TearDown network for sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\" successfully" Jul 15 11:30:39.398590 env[1313]: time="2025-07-15T11:30:39.398127095Z" level=info msg="StopPodSandbox for \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\" returns successfully" Jul 15 11:30:39.398718 env[1313]: time="2025-07-15T11:30:39.398682195Z" level=info msg="RemovePodSandbox for \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\"" Jul 15 11:30:39.398795 env[1313]: time="2025-07-15T11:30:39.398722400Z" level=info msg="Forcibly stopping sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\"" Jul 15 11:30:39.410697 env[1313]: time="2025-07-15T11:30:39.410622220Z" level=info msg="CreateContainer within sandbox \"d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"4e1a6de7cbce8b1b27990c5c934cc3180baaf1a1b07b7ab1ea55941bda290262\"" Jul 15 11:30:39.411698 env[1313]: time="2025-07-15T11:30:39.411663548Z" level=info msg="StartContainer for \"4e1a6de7cbce8b1b27990c5c934cc3180baaf1a1b07b7ab1ea55941bda290262\"" Jul 15 11:30:39.469714 systemd-networkd[1077]: cali706a8b711a1: Link UP Jul 15 11:30:39.473865 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:30:39.473943 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali706a8b711a1: link becomes ready Jul 15 11:30:39.474136 systemd-networkd[1077]: cali706a8b711a1: Gained carrier Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.398 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0 calico-kube-controllers-78c7897fc4- calico-system 0745c5fc-ce0f-47aa-8707-bacfa72cacb9 1116 0 2025-07-15 11:29:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78c7897fc4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78c7897fc4-w24xn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali706a8b711a1 [] [] }} ContainerID="9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" Namespace="calico-system" Pod="calico-kube-controllers-78c7897fc4-w24xn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-" Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.398 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" Namespace="calico-system" Pod="calico-kube-controllers-78c7897fc4-w24xn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" Jul 15 
11:30:39.495304 env[1313]: 2025-07-15 11:30:39.423 [INFO][4829] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" HandleID="k8s-pod-network.9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" Workload="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.423 [INFO][4829] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" HandleID="k8s-pod-network.9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" Workload="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000126ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78c7897fc4-w24xn", "timestamp":"2025-07-15 11:30:39.423187137 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.423 [INFO][4829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.424 [INFO][4829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.424 [INFO][4829] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.431 [INFO][4829] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" host="localhost" Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.435 [INFO][4829] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.443 [INFO][4829] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.445 [INFO][4829] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.447 [INFO][4829] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.447 [INFO][4829] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" host="localhost" Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.449 [INFO][4829] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700 Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.453 [INFO][4829] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" host="localhost" Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.459 [INFO][4829] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" host="localhost" Jul 15 
11:30:39.495304 env[1313]: 2025-07-15 11:30:39.459 [INFO][4829] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" host="localhost" Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.459 [INFO][4829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:39.495304 env[1313]: 2025-07-15 11:30:39.460 [INFO][4829] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" HandleID="k8s-pod-network.9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" Workload="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" Jul 15 11:30:39.496411 env[1313]: 2025-07-15 11:30:39.462 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" Namespace="calico-system" Pod="calico-kube-controllers-78c7897fc4-w24xn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0", GenerateName:"calico-kube-controllers-78c7897fc4-", Namespace:"calico-system", SelfLink:"", UID:"0745c5fc-ce0f-47aa-8707-bacfa72cacb9", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c7897fc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78c7897fc4-w24xn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali706a8b711a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:39.496411 env[1313]: 2025-07-15 11:30:39.462 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" Namespace="calico-system" Pod="calico-kube-controllers-78c7897fc4-w24xn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" Jul 15 11:30:39.496411 env[1313]: 2025-07-15 11:30:39.462 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali706a8b711a1 ContainerID="9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" Namespace="calico-system" Pod="calico-kube-controllers-78c7897fc4-w24xn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" Jul 15 11:30:39.496411 env[1313]: 2025-07-15 11:30:39.476 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" Namespace="calico-system" Pod="calico-kube-controllers-78c7897fc4-w24xn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" Jul 15 11:30:39.496411 env[1313]: 2025-07-15 11:30:39.477 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" Namespace="calico-system" Pod="calico-kube-controllers-78c7897fc4-w24xn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0", GenerateName:"calico-kube-controllers-78c7897fc4-", Namespace:"calico-system", SelfLink:"", UID:"0745c5fc-ce0f-47aa-8707-bacfa72cacb9", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c7897fc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700", Pod:"calico-kube-controllers-78c7897fc4-w24xn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali706a8b711a1", MAC:"56:dd:b6:6a:18:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:39.496411 env[1313]: 2025-07-15 11:30:39.488 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700" Namespace="calico-system" Pod="calico-kube-controllers-78c7897fc4-w24xn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c7897fc4--w24xn-eth0" Jul 15 11:30:39.520573 env[1313]: time="2025-07-15T11:30:39.520473319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:30:39.520573 env[1313]: time="2025-07-15T11:30:39.520523313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:30:39.520573 env[1313]: time="2025-07-15T11:30:39.520537981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:30:39.521052 env[1313]: time="2025-07-15T11:30:39.521017998Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700 pid=4910 runtime=io.containerd.runc.v2 Jul 15 11:30:39.523518 env[1313]: time="2025-07-15T11:30:39.523478428Z" level=info msg="StartContainer for \"4e1a6de7cbce8b1b27990c5c934cc3180baaf1a1b07b7ab1ea55941bda290262\" returns successfully" Jul 15 11:30:39.523000 audit[4926]: NETFILTER_CFG table=filter:114 family=2 entries=58 op=nft_register_chain pid=4926 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:30:39.523000 audit[4926]: SYSCALL arch=c000003e syscall=46 success=yes exit=27164 a0=3 a1=7ffca4290a00 a2=0 a3=7ffca42909ec items=0 ppid=3869 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:39.523000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:30:39.550755 systemd[1]: run-containerd-runc-k8s.io-9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700-runc.iBrjbv.mount: Deactivated successfully. Jul 15 11:30:39.566344 systemd-networkd[1077]: cali568b673fabc: Link UP Jul 15 11:30:39.568887 systemd-networkd[1077]: cali568b673fabc: Gained carrier Jul 15 11:30:39.570434 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali568b673fabc: link becomes ready Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.466 [WARNING][4830] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--96lqs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ea4b49fb-f94a-4309-9631-1c291cb3db4b", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5", Pod:"csi-node-driver-96lqs", 
Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f7d0e49b59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.466 [INFO][4830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.466 [INFO][4830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" iface="eth0" netns="" Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.466 [INFO][4830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.466 [INFO][4830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.501 [INFO][4878] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" HandleID="k8s-pod-network.6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.501 [INFO][4878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.558 [INFO][4878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.569 [WARNING][4878] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" HandleID="k8s-pod-network.6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.569 [INFO][4878] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" HandleID="k8s-pod-network.6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Workload="localhost-k8s-csi--node--driver--96lqs-eth0" Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.571 [INFO][4878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:39.574925 env[1313]: 2025-07-15 11:30:39.573 [INFO][4830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157" Jul 15 11:30:39.575515 env[1313]: time="2025-07-15T11:30:39.574951247Z" level=info msg="TearDown network for sandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\" successfully" Jul 15 11:30:39.579441 env[1313]: time="2025-07-15T11:30:39.579401598Z" level=info msg="RemovePodSandbox \"6b9325f0e8bc601a64600dce01658d894e49c439bd84f5b809f2bc9e2d95f157\" returns successfully" Jul 15 11:30:39.580519 env[1313]: time="2025-07-15T11:30:39.580489945Z" level=info msg="StopPodSandbox for \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\"" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.396 [INFO][4786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--phmvm-eth0 goldmane-58fd7646b9- calico-system 9b356024-f0d5-45bf-a4bc-f2e9fe1afa45 1120 0 2025-07-15 11:29:56 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-phmvm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali568b673fabc [] [] }} ContainerID="d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" Namespace="calico-system" Pod="goldmane-58fd7646b9-phmvm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--phmvm-" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.396 [INFO][4786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" Namespace="calico-system" Pod="goldmane-58fd7646b9-phmvm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.457 [INFO][4842] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" HandleID="k8s-pod-network.d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" Workload="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.457 [INFO][4842] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" HandleID="k8s-pod-network.d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" Workload="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000354b80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-phmvm", "timestamp":"2025-07-15 11:30:39.457071084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.457 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.477 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.477 [INFO][4842] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.531 [INFO][4842] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" host="localhost" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.536 [INFO][4842] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.540 [INFO][4842] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.541 [INFO][4842] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.543 [INFO][4842] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.543 [INFO][4842] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" host="localhost" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.544 [INFO][4842] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29 Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.548 [INFO][4842] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" host="localhost" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.558 [INFO][4842] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" host="localhost" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.558 [INFO][4842] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" host="localhost" Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.558 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:39.581879 env[1313]: 2025-07-15 11:30:39.558 [INFO][4842] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" HandleID="k8s-pod-network.d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" Workload="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" Jul 15 11:30:39.582704 env[1313]: 2025-07-15 11:30:39.560 [INFO][4786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" Namespace="calico-system" Pod="goldmane-58fd7646b9-phmvm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--phmvm-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"9b356024-f0d5-45bf-a4bc-f2e9fe1afa45", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-phmvm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali568b673fabc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:39.582704 env[1313]: 2025-07-15 11:30:39.561 [INFO][4786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" Namespace="calico-system" Pod="goldmane-58fd7646b9-phmvm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" Jul 15 11:30:39.582704 env[1313]: 2025-07-15 11:30:39.561 [INFO][4786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali568b673fabc ContainerID="d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" Namespace="calico-system" Pod="goldmane-58fd7646b9-phmvm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" Jul 15 11:30:39.582704 env[1313]: 2025-07-15 11:30:39.571 [INFO][4786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" Namespace="calico-system" Pod="goldmane-58fd7646b9-phmvm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" Jul 15 11:30:39.582704 env[1313]: 2025-07-15 11:30:39.572 
[INFO][4786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" Namespace="calico-system" Pod="goldmane-58fd7646b9-phmvm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--phmvm-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"9b356024-f0d5-45bf-a4bc-f2e9fe1afa45", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29", Pod:"goldmane-58fd7646b9-phmvm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali568b673fabc", MAC:"c6:d9:75:06:cd:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:39.582704 env[1313]: 2025-07-15 11:30:39.579 [INFO][4786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-phmvm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--phmvm-eth0" Jul 15 11:30:39.588659 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:30:39.594938 env[1313]: time="2025-07-15T11:30:39.594854122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:30:39.594938 env[1313]: time="2025-07-15T11:30:39.594893096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:30:39.594938 env[1313]: time="2025-07-15T11:30:39.594902473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:30:39.595393 env[1313]: time="2025-07-15T11:30:39.595338286Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29 pid=4978 runtime=io.containerd.runc.v2 Jul 15 11:30:39.646000 audit[5013]: NETFILTER_CFG table=filter:115 family=2 entries=60 op=nft_register_chain pid=5013 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:30:39.646000 audit[5013]: SYSCALL arch=c000003e syscall=46 success=yes exit=29900 a0=3 a1=7ffc8a05e8c0 a2=0 a3=7ffc8a05e8ac items=0 ppid=3869 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:39.646000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:30:39.657261 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS 
names, ignoring: No such device or address Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.632 [WARNING][4963] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5acc111e-02a1-439a-93ba-39e1bce08fb2", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c", Pod:"coredns-7c65d6cfc9-7tnjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida3fda89946", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.633 [INFO][4963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.633 [INFO][4963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" iface="eth0" netns="" Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.633 [INFO][4963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.633 [INFO][4963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.663 [INFO][5015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" HandleID="k8s-pod-network.01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.663 [INFO][5015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.663 [INFO][5015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.670 [WARNING][5015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" HandleID="k8s-pod-network.01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.670 [INFO][5015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" HandleID="k8s-pod-network.01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.672 [INFO][5015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:39.676314 env[1313]: 2025-07-15 11:30:39.674 [INFO][4963] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:39.676774 env[1313]: time="2025-07-15T11:30:39.676346195Z" level=info msg="TearDown network for sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\" successfully" Jul 15 11:30:39.676774 env[1313]: time="2025-07-15T11:30:39.676383264Z" level=info msg="StopPodSandbox for \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\" returns successfully" Jul 15 11:30:39.676881 env[1313]: time="2025-07-15T11:30:39.676844656Z" level=info msg="RemovePodSandbox for \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\"" Jul 15 11:30:39.676926 env[1313]: time="2025-07-15T11:30:39.676883280Z" level=info msg="Forcibly stopping sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\"" Jul 15 11:30:39.685047 env[1313]: time="2025-07-15T11:30:39.684994329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c7897fc4-w24xn,Uid:0745c5fc-ce0f-47aa-8707-bacfa72cacb9,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700\"" Jul 15 11:30:39.694855 env[1313]: time="2025-07-15T11:30:39.694806803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-phmvm,Uid:9b356024-f0d5-45bf-a4bc-f2e9fe1afa45,Namespace:calico-system,Attempt:1,} returns sandbox id \"d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29\"" Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.743 [WARNING][5047] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5acc111e-02a1-439a-93ba-39e1bce08fb2", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad696475b55cef8162491ff6e89a67ad7cd3b39ec4e67d1d930b89cb455b717c", Pod:"coredns-7c65d6cfc9-7tnjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida3fda89946", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.743 [INFO][5047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.743 [INFO][5047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" iface="eth0" netns="" Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.743 [INFO][5047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.743 [INFO][5047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.772 [INFO][5058] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" HandleID="k8s-pod-network.01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.772 [INFO][5058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.772 [INFO][5058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.779 [WARNING][5058] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" HandleID="k8s-pod-network.01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.779 [INFO][5058] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" HandleID="k8s-pod-network.01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Workload="localhost-k8s-coredns--7c65d6cfc9--7tnjd-eth0" Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.780 [INFO][5058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:39.784378 env[1313]: 2025-07-15 11:30:39.782 [INFO][5047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57" Jul 15 11:30:39.784862 env[1313]: time="2025-07-15T11:30:39.784406669Z" level=info msg="TearDown network for sandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\" successfully" Jul 15 11:30:39.922938 env[1313]: time="2025-07-15T11:30:39.922820779Z" level=info msg="RemovePodSandbox \"01c8dcaefc85495214bbb662f6fe881de5de4ce6c736764b47a0567a036cce57\" returns successfully" Jul 15 11:30:39.923467 env[1313]: time="2025-07-15T11:30:39.923436052Z" level=info msg="StopPodSandbox for \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\"" Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.084 [WARNING][5082] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7edd760f-4b3e-4f59-9e90-ee9828b261c3", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c", Pod:"coredns-7c65d6cfc9-pbq8g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali706eb2e81dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.084 [INFO][5082] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.084 [INFO][5082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" iface="eth0" netns="" Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.084 [INFO][5082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.084 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.110 [INFO][5090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" HandleID="k8s-pod-network.92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.110 [INFO][5090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.110 [INFO][5090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.168 [WARNING][5090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" HandleID="k8s-pod-network.92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.168 [INFO][5090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" HandleID="k8s-pod-network.92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.170 [INFO][5090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:40.176334 env[1313]: 2025-07-15 11:30:40.174 [INFO][5082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:40.183020 env[1313]: time="2025-07-15T11:30:40.176753840Z" level=info msg="TearDown network for sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\" successfully" Jul 15 11:30:40.183020 env[1313]: time="2025-07-15T11:30:40.176782525Z" level=info msg="StopPodSandbox for \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\" returns successfully" Jul 15 11:30:40.183020 env[1313]: time="2025-07-15T11:30:40.177151001Z" level=info msg="RemovePodSandbox for \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\"" Jul 15 11:30:40.183020 env[1313]: time="2025-07-15T11:30:40.177173173Z" level=info msg="Forcibly stopping sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\"" Jul 15 11:30:40.258696 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 15 11:30:40.258815 kernel: audit: type=1325 audit(1752579040.256:463): table=filter:116 family=2 entries=14 op=nft_register_rule pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:40.256000 audit[5117]: 
NETFILTER_CFG table=filter:116 family=2 entries=14 op=nft_register_rule pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:40.256000 audit[5117]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcada917e0 a2=0 a3=7ffcada917cc items=0 ppid=2223 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:40.265078 kernel: audit: type=1300 audit(1752579040.256:463): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcada917e0 a2=0 a3=7ffcada917cc items=0 ppid=2223 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:40.265138 kernel: audit: type=1327 audit(1752579040.256:463): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:40.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:40.268000 audit[5117]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:40.268000 audit[5117]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcada917e0 a2=0 a3=7ffcada917cc items=0 ppid=2223 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:40.276627 kernel: audit: type=1325 audit(1752579040.268:464): table=nat:117 family=2 entries=20 op=nft_register_rule pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:40.276694 kernel: 
audit: type=1300 audit(1752579040.268:464): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcada917e0 a2=0 a3=7ffcada917cc items=0 ppid=2223 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:40.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:40.279662 kernel: audit: type=1327 audit(1752579040.268:464): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.270 [WARNING][5107] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7edd760f-4b3e-4f59-9e90-ee9828b261c3", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8a125d0180ab2744ecd4c289a307910cb3f3a54c818a13d2618d0a2bdfb289c", 
Pod:"coredns-7c65d6cfc9-pbq8g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali706eb2e81dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.270 [INFO][5107] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.270 [INFO][5107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" iface="eth0" netns="" Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.270 [INFO][5107] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.270 [INFO][5107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.290 [INFO][5119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" HandleID="k8s-pod-network.92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.290 [INFO][5119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.290 [INFO][5119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.296 [WARNING][5119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" HandleID="k8s-pod-network.92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.296 [INFO][5119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" HandleID="k8s-pod-network.92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Workload="localhost-k8s-coredns--7c65d6cfc9--pbq8g-eth0" Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.298 [INFO][5119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:30:40.302115 env[1313]: 2025-07-15 11:30:40.300 [INFO][5107] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961" Jul 15 11:30:40.302563 env[1313]: time="2025-07-15T11:30:40.302128217Z" level=info msg="TearDown network for sandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\" successfully" Jul 15 11:30:40.306658 env[1313]: time="2025-07-15T11:30:40.306621297Z" level=info msg="RemovePodSandbox \"92f2662997900fdd007704b35886d9f601eaa1bc29c6a48083ab8449c60c2961\" returns successfully" Jul 15 11:30:40.307347 env[1313]: time="2025-07-15T11:30:40.307296783Z" level=info msg="StopPodSandbox for \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\"" Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.346 [WARNING][5137] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0", GenerateName:"calico-apiserver-77c5cfffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5cfffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e", Pod:"calico-apiserver-77c5cfffc-xsvx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicec8e462bfc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.349 [INFO][5137] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.349 [INFO][5137] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" iface="eth0" netns="" Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.349 [INFO][5137] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.349 [INFO][5137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.382 [INFO][5146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" HandleID="k8s-pod-network.15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.382 [INFO][5146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.382 [INFO][5146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.388 [WARNING][5146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" HandleID="k8s-pod-network.15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.388 [INFO][5146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" HandleID="k8s-pod-network.15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.389 [INFO][5146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:40.393230 env[1313]: 2025-07-15 11:30:40.391 [INFO][5137] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:40.393710 env[1313]: time="2025-07-15T11:30:40.393257387Z" level=info msg="TearDown network for sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\" successfully" Jul 15 11:30:40.393710 env[1313]: time="2025-07-15T11:30:40.393296801Z" level=info msg="StopPodSandbox for \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\" returns successfully" Jul 15 11:30:40.393789 env[1313]: time="2025-07-15T11:30:40.393767431Z" level=info msg="RemovePodSandbox for \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\"" Jul 15 11:30:40.393843 env[1313]: time="2025-07-15T11:30:40.393791697Z" level=info msg="Forcibly stopping sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\"" Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.424 [WARNING][5164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0", GenerateName:"calico-apiserver-77c5cfffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7b4d93f-f5d8-44d2-bb32-ff5dd044d8c4", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5cfffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3fefe525af5236b7474225cb6092b0a4729bde877cdd01a9bd7d53e0403895e", Pod:"calico-apiserver-77c5cfffc-xsvx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicec8e462bfc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.424 [INFO][5164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.424 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" iface="eth0" netns="" Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.424 [INFO][5164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.424 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.443 [INFO][5172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" HandleID="k8s-pod-network.15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.443 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.443 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.449 [WARNING][5172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" HandleID="k8s-pod-network.15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.449 [INFO][5172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" HandleID="k8s-pod-network.15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Workload="localhost-k8s-calico--apiserver--77c5cfffc--xsvx6-eth0" Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.450 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:40.454857 env[1313]: 2025-07-15 11:30:40.452 [INFO][5164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6" Jul 15 11:30:40.454857 env[1313]: time="2025-07-15T11:30:40.454798634Z" level=info msg="TearDown network for sandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\" successfully" Jul 15 11:30:40.459341 env[1313]: time="2025-07-15T11:30:40.459309148Z" level=info msg="RemovePodSandbox \"15fdcdd4bded17d5c47a9a3bf07c2a1cfb5b4797e98d592e56df137e50e5cfc6\" returns successfully" Jul 15 11:30:40.459818 env[1313]: time="2025-07-15T11:30:40.459787833Z" level=info msg="StopPodSandbox for \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\"" Jul 15 11:30:40.525145 systemd-networkd[1077]: cali706a8b711a1: Gained IPv6LL Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.497 [WARNING][5189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0", GenerateName:"calico-apiserver-77c5cfffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad1dfeee-bd95-4e9e-b226-86afd94e0964", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5cfffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880", Pod:"calico-apiserver-77c5cfffc-tnzhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califdeddd48904", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.500 [INFO][5189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.500 [INFO][5189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" iface="eth0" netns="" Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.500 [INFO][5189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.500 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.527 [INFO][5198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" HandleID="k8s-pod-network.b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.527 [INFO][5198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.527 [INFO][5198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.534 [WARNING][5198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" HandleID="k8s-pod-network.b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.534 [INFO][5198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" HandleID="k8s-pod-network.b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.536 [INFO][5198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:40.540945 env[1313]: 2025-07-15 11:30:40.539 [INFO][5189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:40.541582 env[1313]: time="2025-07-15T11:30:40.540983512Z" level=info msg="TearDown network for sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\" successfully" Jul 15 11:30:40.541582 env[1313]: time="2025-07-15T11:30:40.541012196Z" level=info msg="StopPodSandbox for \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\" returns successfully" Jul 15 11:30:40.542192 env[1313]: time="2025-07-15T11:30:40.542163783Z" level=info msg="RemovePodSandbox for \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\"" Jul 15 11:30:40.542326 env[1313]: time="2025-07-15T11:30:40.542279861Z" level=info msg="Forcibly stopping sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\"" Jul 15 11:30:40.589809 systemd-networkd[1077]: cali568b673fabc: Gained IPv6LL Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.572 [WARNING][5215] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0", GenerateName:"calico-apiserver-77c5cfffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad1dfeee-bd95-4e9e-b226-86afd94e0964", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 29, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5cfffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99b190fc6b2b4bdfec4e704585fdd9e0e24398b82d648867de365f5860a19880", Pod:"calico-apiserver-77c5cfffc-tnzhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califdeddd48904", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.573 [INFO][5215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.573 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" iface="eth0" netns="" Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.573 [INFO][5215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.573 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.599 [INFO][5223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" HandleID="k8s-pod-network.b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.599 [INFO][5223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.599 [INFO][5223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.934 [WARNING][5223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" HandleID="k8s-pod-network.b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.935 [INFO][5223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" HandleID="k8s-pod-network.b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Workload="localhost-k8s-calico--apiserver--77c5cfffc--tnzhf-eth0" Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.936 [INFO][5223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:30:40.941152 env[1313]: 2025-07-15 11:30:40.939 [INFO][5215] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200" Jul 15 11:30:40.941910 env[1313]: time="2025-07-15T11:30:40.941818487Z" level=info msg="TearDown network for sandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\" successfully" Jul 15 11:30:40.951693 env[1313]: time="2025-07-15T11:30:40.951627383Z" level=info msg="RemovePodSandbox \"b6c1d68cc0fa946068c934e698db2d39a59d09146e5fc6647551f8e4cb157200\" returns successfully" Jul 15 11:30:41.237814 kubelet[2092]: I0715 11:30:41.237698 2092 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:30:41.795390 env[1313]: time="2025-07-15T11:30:41.795325054Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:41.797525 env[1313]: time="2025-07-15T11:30:41.797495044Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 15 11:30:41.799046 env[1313]: time="2025-07-15T11:30:41.799011499Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:41.800420 env[1313]: time="2025-07-15T11:30:41.800386428Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:41.800829 env[1313]: time="2025-07-15T11:30:41.800804668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 11:30:41.801716 env[1313]: time="2025-07-15T11:30:41.801688238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 11:30:41.802609 env[1313]: time="2025-07-15T11:30:41.802566076Z" level=info msg="CreateContainer within sandbox \"cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 11:30:41.814027 env[1313]: time="2025-07-15T11:30:41.813989321Z" level=info msg="CreateContainer within sandbox \"cc72b8107cafa5c03b72af9525d23f4c7c18ba038418623dab88a88a4bdedfe5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"35063dac8532f206ac165bf76ae50ba9a94d189e23d2f10a7774ba9e49505045\"" Jul 15 11:30:41.814790 env[1313]: time="2025-07-15T11:30:41.814760128Z" level=info msg="StartContainer for \"35063dac8532f206ac165bf76ae50ba9a94d189e23d2f10a7774ba9e49505045\"" Jul 15 11:30:41.856865 env[1313]: time="2025-07-15T11:30:41.856829933Z" level=info msg="StartContainer for \"35063dac8532f206ac165bf76ae50ba9a94d189e23d2f10a7774ba9e49505045\" returns 
successfully" Jul 15 11:30:42.146311 kubelet[2092]: I0715 11:30:42.146206 2092 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 11:30:42.146311 kubelet[2092]: I0715 11:30:42.146234 2092 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 11:30:42.251171 kubelet[2092]: I0715 11:30:42.251071 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77c5cfffc-xsvx6" podStartSLOduration=39.903591682 podStartE2EDuration="48.251050041s" podCreationTimestamp="2025-07-15 11:29:54 +0000 UTC" firstStartedPulling="2025-07-15 11:30:31.030485041 +0000 UTC m=+53.058484265" lastFinishedPulling="2025-07-15 11:30:39.37794339 +0000 UTC m=+61.405942624" observedRunningTime="2025-07-15 11:30:40.240885183 +0000 UTC m=+62.268884417" watchObservedRunningTime="2025-07-15 11:30:42.251050041 +0000 UTC m=+64.279049275" Jul 15 11:30:42.619810 systemd[1]: Started sshd@14-10.0.0.41:22-10.0.0.1:47022.service. Jul 15 11:30:42.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.41:22-10.0.0.1:47022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:42.625671 kernel: audit: type=1130 audit(1752579042.619:465): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.41:22-10.0.0.1:47022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:30:42.660000 audit[5266]: USER_ACCT pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:42.661252 sshd[5266]: Accepted publickey for core from 10.0.0.1 port 47022 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:42.663193 sshd[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:42.662000 audit[5266]: CRED_ACQ pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:42.666486 systemd-logind[1289]: New session 15 of user core. Jul 15 11:30:42.667272 systemd[1]: Started session-15.scope. Jul 15 11:30:42.669669 kernel: audit: type=1101 audit(1752579042.660:466): pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:42.669722 kernel: audit: type=1103 audit(1752579042.662:467): pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:42.669749 kernel: audit: type=1006 audit(1752579042.662:468): pid=5266 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 15 11:30:42.662000 audit[5266]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0af455a0 a2=3 a3=0 items=0 ppid=1 pid=5266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:42.662000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:42.671000 audit[5266]: USER_START pid=5266 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:42.672000 audit[5269]: CRED_ACQ pid=5269 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:42.826330 sshd[5266]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:42.826000 audit[5266]: USER_END pid=5266 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:42.826000 audit[5266]: CRED_DISP pid=5266 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:42.828949 systemd[1]: sshd@14-10.0.0.41:22-10.0.0.1:47022.service: Deactivated successfully. Jul 15 11:30:42.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.41:22-10.0.0.1:47022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:42.829955 systemd-logind[1289]: Session 15 logged out. Waiting for processes to exit. 
Jul 15 11:30:42.830074 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 11:30:42.830923 systemd-logind[1289]: Removed session 15. Jul 15 11:30:44.127514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056319913.mount: Deactivated successfully. Jul 15 11:30:44.829709 env[1313]: time="2025-07-15T11:30:44.829661099Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:44.844396 env[1313]: time="2025-07-15T11:30:44.844356941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:44.979513 env[1313]: time="2025-07-15T11:30:44.979468622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:45.038161 env[1313]: time="2025-07-15T11:30:45.038102638Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:45.038799 env[1313]: time="2025-07-15T11:30:45.038770279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 11:30:45.039831 env[1313]: time="2025-07-15T11:30:45.039808130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 11:30:45.040703 env[1313]: time="2025-07-15T11:30:45.040675929Z" level=info msg="CreateContainer within sandbox \"57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd\" for container 
&ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 11:30:45.078543 kubelet[2092]: E0715 11:30:45.078508 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:45.266837 env[1313]: time="2025-07-15T11:30:45.266767790Z" level=info msg="CreateContainer within sandbox \"57db52d66d78df3af0067c86e7e0e63efb4829f69071598807acded9f2e34fbd\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fbbd82aa497e0d7ee096f9259c582316f5bd5b1c7a2a1a55a9c3f7616ad1fb70\"" Jul 15 11:30:45.267253 env[1313]: time="2025-07-15T11:30:45.267230344Z" level=info msg="StartContainer for \"fbbd82aa497e0d7ee096f9259c582316f5bd5b1c7a2a1a55a9c3f7616ad1fb70\"" Jul 15 11:30:45.336481 env[1313]: time="2025-07-15T11:30:45.336425207Z" level=info msg="StartContainer for \"fbbd82aa497e0d7ee096f9259c582316f5bd5b1c7a2a1a55a9c3f7616ad1fb70\" returns successfully" Jul 15 11:30:46.306178 kubelet[2092]: I0715 11:30:46.306080 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79d6476997-mkkkn" podStartSLOduration=2.296515845 podStartE2EDuration="18.306051156s" podCreationTimestamp="2025-07-15 11:30:28 +0000 UTC" firstStartedPulling="2025-07-15 11:30:29.030117185 +0000 UTC m=+51.058116419" lastFinishedPulling="2025-07-15 11:30:45.039652476 +0000 UTC m=+67.067651730" observedRunningTime="2025-07-15 11:30:46.303294168 +0000 UTC m=+68.331293402" watchObservedRunningTime="2025-07-15 11:30:46.306051156 +0000 UTC m=+68.334050390" Jul 15 11:30:46.306657 kubelet[2092]: I0715 11:30:46.306269 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-96lqs" podStartSLOduration=36.266201326 podStartE2EDuration="49.306261233s" podCreationTimestamp="2025-07-15 11:29:57 +0000 UTC" firstStartedPulling="2025-07-15 11:30:28.761395541 +0000 UTC m=+50.789394775" 
lastFinishedPulling="2025-07-15 11:30:41.801455447 +0000 UTC m=+63.829454682" observedRunningTime="2025-07-15 11:30:42.252332064 +0000 UTC m=+64.280331298" watchObservedRunningTime="2025-07-15 11:30:46.306261233 +0000 UTC m=+68.334260467" Jul 15 11:30:46.338000 audit[5344]: NETFILTER_CFG table=filter:118 family=2 entries=13 op=nft_register_rule pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:46.344050 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 15 11:30:46.344141 kernel: audit: type=1325 audit(1752579046.338:474): table=filter:118 family=2 entries=13 op=nft_register_rule pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:46.338000 audit[5344]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffee42e8300 a2=0 a3=7ffee42e82ec items=0 ppid=2223 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:46.351609 kernel: audit: type=1300 audit(1752579046.338:474): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffee42e8300 a2=0 a3=7ffee42e82ec items=0 ppid=2223 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:46.351746 kernel: audit: type=1327 audit(1752579046.338:474): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:46.338000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:46.351000 audit[5344]: NETFILTER_CFG table=nat:119 family=2 entries=27 op=nft_register_chain pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:46.356385 kernel: audit: 
type=1325 audit(1752579046.351:475): table=nat:119 family=2 entries=27 op=nft_register_chain pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:46.356438 kernel: audit: type=1300 audit(1752579046.351:475): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffee42e8300 a2=0 a3=7ffee42e82ec items=0 ppid=2223 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:46.351000 audit[5344]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffee42e8300 a2=0 a3=7ffee42e82ec items=0 ppid=2223 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:46.351000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:46.363165 kernel: audit: type=1327 audit(1752579046.351:475): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:47.830103 systemd[1]: Started sshd@15-10.0.0.41:22-10.0.0.1:47030.service. Jul 15 11:30:47.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.41:22-10.0.0.1:47030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:47.872212 kernel: audit: type=1130 audit(1752579047.829:476): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.41:22-10.0.0.1:47030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:30:47.912000 audit[5345]: USER_ACCT pid=5345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:47.913073 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 47030 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:47.916000 audit[5345]: CRED_ACQ pid=5345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:47.917966 sshd[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:47.922333 kernel: audit: type=1101 audit(1752579047.912:477): pid=5345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:47.922389 kernel: audit: type=1103 audit(1752579047.916:478): pid=5345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:47.922411 kernel: audit: type=1006 audit(1752579047.916:479): pid=5345 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 15 11:30:47.922013 systemd-logind[1289]: New session 16 of user core. Jul 15 11:30:47.922338 systemd[1]: Started session-16.scope. 
Jul 15 11:30:47.916000 audit[5345]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3bfced60 a2=3 a3=0 items=0 ppid=1 pid=5345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:47.916000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:47.926000 audit[5345]: USER_START pid=5345 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:47.927000 audit[5348]: CRED_ACQ pid=5348 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:48.166701 sshd[5345]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:48.167000 audit[5345]: USER_END pid=5345 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:48.167000 audit[5345]: CRED_DISP pid=5345 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:48.169330 systemd[1]: sshd@15-10.0.0.41:22-10.0.0.1:47030.service: Deactivated successfully. Jul 15 11:30:48.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.41:22-10.0.0.1:47030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 15 11:30:48.170588 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 11:30:48.171060 systemd-logind[1289]: Session 16 logged out. Waiting for processes to exit. Jul 15 11:30:48.171999 systemd-logind[1289]: Removed session 16. Jul 15 11:30:49.445194 env[1313]: time="2025-07-15T11:30:49.445135790Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:49.448884 env[1313]: time="2025-07-15T11:30:49.448843028Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:49.451373 env[1313]: time="2025-07-15T11:30:49.451340374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:49.454052 env[1313]: time="2025-07-15T11:30:49.454011754Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:49.454482 env[1313]: time="2025-07-15T11:30:49.454454368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 11:30:49.455503 env[1313]: time="2025-07-15T11:30:49.455468303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 11:30:49.467905 env[1313]: time="2025-07-15T11:30:49.467124379Z" level=info msg="CreateContainer within sandbox \"9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 11:30:49.487550 env[1313]: time="2025-07-15T11:30:49.487507251Z" level=info msg="CreateContainer within sandbox \"9546bb3866e0f494dba1df86f2a10cb169a9fcafcba8532dc161ab2e6fb6b700\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"44204dc857594cf6f6f0079898798f360380136bfa0cb56ea9667df92652d976\"" Jul 15 11:30:49.488165 env[1313]: time="2025-07-15T11:30:49.488101056Z" level=info msg="StartContainer for \"44204dc857594cf6f6f0079898798f360380136bfa0cb56ea9667df92652d976\"" Jul 15 11:30:49.552669 env[1313]: time="2025-07-15T11:30:49.552618810Z" level=info msg="StartContainer for \"44204dc857594cf6f6f0079898798f360380136bfa0cb56ea9667df92652d976\" returns successfully" Jul 15 11:30:50.284431 kubelet[2092]: I0715 11:30:50.284359 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78c7897fc4-w24xn" podStartSLOduration=43.515253895 podStartE2EDuration="53.284341742s" podCreationTimestamp="2025-07-15 11:29:57 +0000 UTC" firstStartedPulling="2025-07-15 11:30:39.686148159 +0000 UTC m=+61.714147393" lastFinishedPulling="2025-07-15 11:30:49.455236006 +0000 UTC m=+71.483235240" observedRunningTime="2025-07-15 11:30:50.283997018 +0000 UTC m=+72.311996252" watchObservedRunningTime="2025-07-15 11:30:50.284341742 +0000 UTC m=+72.312340976" Jul 15 11:30:53.047385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013588147.mount: Deactivated successfully. Jul 15 11:30:53.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.41:22-10.0.0.1:42840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:53.169725 systemd[1]: Started sshd@16-10.0.0.41:22-10.0.0.1:42840.service. 
Jul 15 11:30:53.171141 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 15 11:30:53.171273 kernel: audit: type=1130 audit(1752579053.168:485): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.41:22-10.0.0.1:42840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:53.209000 audit[5433]: USER_ACCT pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.244225 kernel: audit: type=1101 audit(1752579053.209:486): pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.244281 kernel: audit: type=1103 audit(1752579053.211:487): pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.244305 kernel: audit: type=1006 audit(1752579053.211:488): pid=5433 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jul 15 11:30:53.245624 kernel: audit: type=1300 audit(1752579053.211:488): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd98c4ae50 a2=3 a3=0 items=0 ppid=1 pid=5433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:53.245664 kernel: audit: type=1327 audit(1752579053.211:488): proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:53.245682 kernel: audit: type=1105 
audit(1752579053.221:489): pid=5433 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.245700 kernel: audit: type=1103 audit(1752579053.223:490): pid=5436 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.211000 audit[5433]: CRED_ACQ pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.211000 audit[5433]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd98c4ae50 a2=3 a3=0 items=0 ppid=1 pid=5433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:53.211000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:53.221000 audit[5433]: USER_START pid=5433 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.223000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.216949 systemd-logind[1289]: New session 17 of user core. 
Jul 15 11:30:53.213260 sshd[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:53.246324 sshd[5433]: Accepted publickey for core from 10.0.0.1 port 42840 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:53.217924 systemd[1]: Started session-17.scope. Jul 15 11:30:53.697650 sshd[5433]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:53.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.41:22-10.0.0.1:42848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:53.700149 systemd[1]: Started sshd@17-10.0.0.41:22-10.0.0.1:42848.service. Jul 15 11:30:53.707888 kernel: audit: type=1130 audit(1752579053.698:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.41:22-10.0.0.1:42848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:53.706000 audit[5433]: USER_END pid=5433 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.709424 systemd[1]: sshd@16-10.0.0.41:22-10.0.0.1:42840.service: Deactivated successfully. Jul 15 11:30:53.712735 kernel: audit: type=1106 audit(1752579053.706:492): pid=5433 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.710134 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 11:30:53.713490 systemd-logind[1289]: Session 17 logged out. 
Waiting for processes to exit. Jul 15 11:30:53.714410 systemd-logind[1289]: Removed session 17. Jul 15 11:30:53.706000 audit[5433]: CRED_DISP pid=5433 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.41:22-10.0.0.1:42840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:53.740000 audit[5445]: USER_ACCT pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.742790 sshd[5445]: Accepted publickey for core from 10.0.0.1 port 42848 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:53.742000 audit[5445]: CRED_ACQ pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.742000 audit[5445]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3303dc60 a2=3 a3=0 items=0 ppid=1 pid=5445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:53.742000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:53.753000 audit[5445]: USER_START pid=5445 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.754000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:53.748654 systemd-logind[1289]: New session 18 of user core. Jul 15 11:30:53.744674 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:53.749325 systemd[1]: Started session-18.scope. Jul 15 11:30:54.120954 sshd[5445]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:54.123015 systemd[1]: Started sshd@18-10.0.0.41:22-10.0.0.1:42858.service. Jul 15 11:30:54.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.41:22-10.0.0.1:42858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:54.122000 audit[5445]: USER_END pid=5445 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:54.122000 audit[5445]: CRED_DISP pid=5445 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:54.127814 systemd[1]: sshd@17-10.0.0.41:22-10.0.0.1:42848.service: Deactivated successfully. Jul 15 11:30:54.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.41:22-10.0.0.1:42848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:30:54.128535 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 11:30:54.128986 systemd-logind[1289]: Session 18 logged out. Waiting for processes to exit. Jul 15 11:30:54.129679 systemd-logind[1289]: Removed session 18. Jul 15 11:30:54.163000 audit[5459]: USER_ACCT pid=5459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:54.165024 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 42858 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:54.164000 audit[5459]: CRED_ACQ pid=5459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:54.164000 audit[5459]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff88375a50 a2=3 a3=0 items=0 ppid=1 pid=5459 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:54.164000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:54.172000 audit[5459]: USER_START pid=5459 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:54.173000 audit[5464]: CRED_ACQ pid=5464 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:54.169499 
systemd-logind[1289]: New session 19 of user core. Jul 15 11:30:54.166214 sshd[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:54.170161 systemd[1]: Started session-19.scope. Jul 15 11:30:55.702198 env[1313]: time="2025-07-15T11:30:55.702129532Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:55.706922 env[1313]: time="2025-07-15T11:30:55.706826648Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:55.708650 env[1313]: time="2025-07-15T11:30:55.708586651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:55.710400 env[1313]: time="2025-07-15T11:30:55.710351473Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:30:55.710997 env[1313]: time="2025-07-15T11:30:55.710958460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 11:30:55.716632 env[1313]: time="2025-07-15T11:30:55.716595244Z" level=info msg="CreateContainer within sandbox \"d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 11:30:55.732808 env[1313]: time="2025-07-15T11:30:55.732752507Z" level=info msg="CreateContainer within sandbox \"d82a997905d2b0ec7a6557cb9a59f06c0988909cd28c6fcb1697742b9f3ebf29\" for 
&ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d75dcc11552369c638d54e5f6e744db60a4ed7a5e7303f98b7128f74f5eb2298\"" Jul 15 11:30:55.734088 env[1313]: time="2025-07-15T11:30:55.733208874Z" level=info msg="StartContainer for \"d75dcc11552369c638d54e5f6e744db60a4ed7a5e7303f98b7128f74f5eb2298\"" Jul 15 11:30:55.799234 env[1313]: time="2025-07-15T11:30:55.799184455Z" level=info msg="StartContainer for \"d75dcc11552369c638d54e5f6e744db60a4ed7a5e7303f98b7128f74f5eb2298\" returns successfully" Jul 15 11:30:56.587000 audit[5513]: NETFILTER_CFG table=filter:120 family=2 entries=12 op=nft_register_rule pid=5513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:56.587000 audit[5513]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffca18d6a10 a2=0 a3=7ffca18d69fc items=0 ppid=2223 pid=5513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:56.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:56.592000 audit[5513]: NETFILTER_CFG table=nat:121 family=2 entries=22 op=nft_register_rule pid=5513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:56.592000 audit[5513]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffca18d6a10 a2=0 a3=7ffca18d69fc items=0 ppid=2223 pid=5513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:56.592000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:57.384000 audit[5516]: NETFILTER_CFG table=filter:122 family=2 entries=24 op=nft_register_rule 
pid=5516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:57.384000 audit[5516]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7fff921d8d50 a2=0 a3=7fff921d8d3c items=0 ppid=2223 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:57.384000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:57.393000 audit[5516]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=5516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:57.393000 audit[5516]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff921d8d50 a2=0 a3=0 items=0 ppid=2223 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:57.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:57.406787 sshd[5459]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:57.406000 audit[5459]: USER_END pid=5459 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:57.406000 audit[5459]: CRED_DISP pid=5459 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:57.409737 systemd[1]: Started 
sshd@19-10.0.0.41:22-10.0.0.1:42860.service. Jul 15 11:30:57.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.41:22-10.0.0.1:42860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:57.410560 systemd[1]: sshd@18-10.0.0.41:22-10.0.0.1:42858.service: Deactivated successfully. Jul 15 11:30:57.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.41:22-10.0.0.1:42858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:57.412051 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 11:30:57.412539 systemd-logind[1289]: Session 19 logged out. Waiting for processes to exit. Jul 15 11:30:57.413602 systemd-logind[1289]: Removed session 19. Jul 15 11:30:57.450000 audit[5517]: USER_ACCT pid=5517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:57.452726 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 42860 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:57.451000 audit[5517]: CRED_ACQ pid=5517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:57.452000 audit[5517]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd17d5cf10 a2=3 a3=0 items=0 ppid=1 pid=5517 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:57.452000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:57.454262 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:57.458707 systemd-logind[1289]: New session 20 of user core. Jul 15 11:30:57.459387 systemd[1]: Started session-20.scope. Jul 15 11:30:57.466000 audit[5517]: USER_START pid=5517 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:57.467000 audit[5522]: CRED_ACQ pid=5522 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:57.571802 systemd[1]: run-containerd-runc-k8s.io-d75dcc11552369c638d54e5f6e744db60a4ed7a5e7303f98b7128f74f5eb2298-runc.UfslKc.mount: Deactivated successfully. Jul 15 11:30:58.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.41:22-10.0.0.1:42864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:58.072170 sshd[5517]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:58.071560 systemd[1]: Started sshd@20-10.0.0.41:22-10.0.0.1:42864.service. 
Jul 15 11:30:58.077000 audit[5517]: USER_END pid=5517 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:58.077000 audit[5517]: CRED_DISP pid=5517 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:58.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.41:22-10.0.0.1:42860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:58.079813 systemd[1]: sshd@19-10.0.0.41:22-10.0.0.1:42860.service: Deactivated successfully. Jul 15 11:30:58.080647 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 11:30:58.082829 systemd-logind[1289]: Session 20 logged out. Waiting for processes to exit. Jul 15 11:30:58.084942 systemd-logind[1289]: Removed session 20. 
Jul 15 11:30:58.106941 kubelet[2092]: E0715 11:30:58.106098 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:58.114000 audit[5553]: USER_ACCT pid=5553 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:58.115096 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 42864 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:30:58.115000 audit[5553]: CRED_ACQ pid=5553 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:58.115000 audit[5553]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1c63fee0 a2=3 a3=0 items=0 ppid=1 pid=5553 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:58.115000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:30:58.116169 sshd[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:58.122176 systemd[1]: Started session-21.scope. Jul 15 11:30:58.122490 systemd-logind[1289]: New session 21 of user core. 
Jul 15 11:30:58.126000 audit[5553]: USER_START pid=5553 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:58.127000 audit[5558]: CRED_ACQ pid=5558 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:58.253685 sshd[5553]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:58.257681 kernel: kauditd_printk_skb: 54 callbacks suppressed Jul 15 11:30:58.258598 kernel: audit: type=1106 audit(1752579058.254:531): pid=5553 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:58.254000 audit[5553]: USER_END pid=5553 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:58.256897 systemd-logind[1289]: Session 21 logged out. Waiting for processes to exit. Jul 15 11:30:58.257744 systemd[1]: sshd@20-10.0.0.41:22-10.0.0.1:42864.service: Deactivated successfully. Jul 15 11:30:58.258756 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 11:30:58.259996 systemd-logind[1289]: Removed session 21. 
Jul 15 11:30:58.254000 audit[5553]: CRED_DISP pid=5553 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:58.268744 kernel: audit: type=1104 audit(1752579058.254:532): pid=5553 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:30:58.268856 kernel: audit: type=1131 audit(1752579058.257:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.41:22-10.0.0.1:42864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:30:58.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.41:22-10.0.0.1:42864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:30:58.412000 audit[5571]: NETFILTER_CFG table=filter:124 family=2 entries=36 op=nft_register_rule pid=5571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:58.412000 audit[5571]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffc9a3c19b0 a2=0 a3=7ffc9a3c199c items=0 ppid=2223 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:58.422950 kernel: audit: type=1325 audit(1752579058.412:534): table=filter:124 family=2 entries=36 op=nft_register_rule pid=5571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:58.423003 kernel: audit: type=1300 audit(1752579058.412:534): arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffc9a3c19b0 a2=0 a3=7ffc9a3c199c items=0 ppid=2223 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:58.423023 kernel: audit: type=1327 audit(1752579058.412:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:58.412000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:58.428000 audit[5571]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=5571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:58.428000 audit[5571]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc9a3c19b0 a2=0 a3=0 items=0 ppid=2223 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 15 11:30:58.435530 kernel: audit: type=1325 audit(1752579058.428:535): table=nat:125 family=2 entries=22 op=nft_register_rule pid=5571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:30:58.435826 kernel: audit: type=1300 audit(1752579058.428:535): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc9a3c19b0 a2=0 a3=0 items=0 ppid=2223 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:30:58.435855 kernel: audit: type=1327 audit(1752579058.428:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:58.428000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:30:58.568489 systemd[1]: run-containerd-runc-k8s.io-d75dcc11552369c638d54e5f6e744db60a4ed7a5e7303f98b7128f74f5eb2298-runc.e17ErN.mount: Deactivated successfully. Jul 15 11:31:03.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.41:22-10.0.0.1:56738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:31:03.255286 systemd[1]: Started sshd@21-10.0.0.41:22-10.0.0.1:56738.service. Jul 15 11:31:03.260670 kernel: audit: type=1130 audit(1752579063.254:536): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.41:22-10.0.0.1:56738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:31:03.292000 audit[5596]: USER_ACCT pid=5596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.293789 sshd[5596]: Accepted publickey for core from 10.0.0.1 port 56738 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:31:03.295424 sshd[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:31:03.294000 audit[5596]: CRED_ACQ pid=5596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.299765 systemd-logind[1289]: New session 22 of user core. Jul 15 11:31:03.300583 systemd[1]: Started session-22.scope. Jul 15 11:31:03.303445 kernel: audit: type=1101 audit(1752579063.292:537): pid=5596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.303559 kernel: audit: type=1103 audit(1752579063.294:538): pid=5596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.303590 kernel: audit: type=1006 audit(1752579063.294:539): pid=5596 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 15 11:31:03.294000 audit[5596]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff59286f60 a2=3 a3=0 items=0 ppid=1 pid=5596 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:31:03.294000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:31:03.308871 kernel: audit: type=1300 audit(1752579063.294:539): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff59286f60 a2=3 a3=0 items=0 ppid=1 pid=5596 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:31:03.308916 kernel: audit: type=1327 audit(1752579063.294:539): proctitle=737368643A20636F7265205B707269765D Jul 15 11:31:03.308944 kernel: audit: type=1105 audit(1752579063.304:540): pid=5596 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.304000 audit[5596]: USER_START pid=5596 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.312971 kernel: audit: type=1103 audit(1752579063.306:541): pid=5599 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.306000 audit[5599]: CRED_ACQ pid=5599 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.428459 sshd[5596]: pam_unix(sshd:session): session closed for user core Jul 15 11:31:03.428000 
audit[5596]: USER_END pid=5596 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.431224 systemd[1]: sshd@21-10.0.0.41:22-10.0.0.1:56738.service: Deactivated successfully. Jul 15 11:31:03.432297 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 11:31:03.432664 systemd-logind[1289]: Session 22 logged out. Waiting for processes to exit. Jul 15 11:31:03.433751 systemd-logind[1289]: Removed session 22. Jul 15 11:31:03.428000 audit[5596]: CRED_DISP pid=5596 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.437957 kernel: audit: type=1106 audit(1752579063.428:542): pid=5596 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.437998 kernel: audit: type=1104 audit(1752579063.428:543): pid=5596 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:31:03.438026 kernel: audit: type=1131 audit(1752579063.428:544): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.41:22-10.0.0.1:56738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:31:03.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.41:22-10.0.0.1:56738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:31:04.432790 kubelet[2092]: I0715 11:31:04.432737 2092 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:31:04.672000 audit[5611]: NETFILTER_CFG table=filter:126 family=2 entries=24 op=nft_register_rule pid=5611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:31:04.672000 audit[5611]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc21cf5b50 a2=0 a3=7ffc21cf5b3c items=0 ppid=2223 pid=5611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:31:04.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:31:04.680593 kubelet[2092]: I0715 11:31:04.678471 2092 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-phmvm" podStartSLOduration=52.662865105 podStartE2EDuration="1m8.678455944s" podCreationTimestamp="2025-07-15 11:29:56 +0000 UTC" firstStartedPulling="2025-07-15 11:30:39.69634519 +0000 UTC m=+61.724344424" lastFinishedPulling="2025-07-15 11:30:55.711936029 +0000 UTC m=+77.739935263" observedRunningTime="2025-07-15 11:30:56.568699989 +0000 UTC m=+78.596699223" watchObservedRunningTime="2025-07-15 11:31:04.678455944 +0000 UTC m=+86.706455178" Jul 15 11:31:04.681000 audit[5611]: NETFILTER_CFG table=nat:127 family=2 entries=106 op=nft_register_chain pid=5611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:31:04.681000 audit[5611]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 
a1=7ffc21cf5b50 a2=0 a3=7ffc21cf5b3c items=0 ppid=2223 pid=5611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:04.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:31:04.708000 audit[5614]: NETFILTER_CFG table=filter:128 family=2 entries=11 op=nft_register_rule pid=5614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:31:04.708000 audit[5614]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc2304a520 a2=0 a3=7ffc2304a50c items=0 ppid=2223 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:04.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:31:04.714000 audit[5614]: NETFILTER_CFG table=nat:129 family=2 entries=53 op=nft_register_chain pid=5614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:31:04.714000 audit[5614]: SYSCALL arch=c000003e syscall=46 success=yes exit=19332 a0=3 a1=7ffc2304a520 a2=0 a3=7ffc2304a50c items=0 ppid=2223 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:04.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:31:08.078336 kubelet[2092]: E0715 11:31:08.078293 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:31:08.432102 systemd[1]: Started sshd@22-10.0.0.41:22-10.0.0.1:56746.service.
Jul 15 11:31:08.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.41:22-10.0.0.1:56746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:31:08.433041 kernel: kauditd_printk_skb: 12 callbacks suppressed
Jul 15 11:31:08.433154 kernel: audit: type=1130 audit(1752579068.431:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.41:22-10.0.0.1:56746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:31:08.466000 audit[5615]: USER_ACCT pid=5615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.467479 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 56746 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:31:08.470000 audit[5615]: CRED_ACQ pid=5615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.471608 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:31:08.474722 kernel: audit: type=1101 audit(1752579068.466:550): pid=5615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.474775 kernel: audit: type=1103 audit(1752579068.470:551): pid=5615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.474952 kernel: audit: type=1006 audit(1752579068.470:552): pid=5615 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Jul 15 11:31:08.475485 systemd-logind[1289]: New session 23 of user core.
Jul 15 11:31:08.475803 systemd[1]: Started session-23.scope.
Jul 15 11:31:08.481665 kernel: audit: type=1300 audit(1752579068.470:552): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc183a9850 a2=3 a3=0 items=0 ppid=1 pid=5615 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:08.481739 kernel: audit: type=1327 audit(1752579068.470:552): proctitle=737368643A20636F7265205B707269765D
Jul 15 11:31:08.470000 audit[5615]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc183a9850 a2=3 a3=0 items=0 ppid=1 pid=5615 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:08.470000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 15 11:31:08.482122 kernel: audit: type=1105 audit(1752579068.479:553): pid=5615 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.479000 audit[5615]: USER_START pid=5615 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.480000 audit[5618]: CRED_ACQ pid=5618 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.490491 kernel: audit: type=1103 audit(1752579068.480:554): pid=5618 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.575077 sshd[5615]: pam_unix(sshd:session): session closed for user core
Jul 15 11:31:08.575000 audit[5615]: USER_END pid=5615 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.577448 systemd[1]: sshd@22-10.0.0.41:22-10.0.0.1:56746.service: Deactivated successfully.
Jul 15 11:31:08.578336 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 11:31:08.578390 systemd-logind[1289]: Session 23 logged out. Waiting for processes to exit.
Jul 15 11:31:08.579094 systemd-logind[1289]: Removed session 23.
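The audit PROCTITLE records above carry the process command line as a single hex-encoded field, with NUL bytes separating the argv elements. A minimal decoding sketch (plain Python, not part of any log tooling) using the two proctitle values that appear verbatim in this log:

```python
def decode_proctitle(hex_field: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    raw = bytes.fromhex(hex_field)
    # argv elements are separated by NUL bytes; join with spaces for display
    return raw.decode("ascii", errors="replace").replace("\x00", " ")

# sshd privilege-separation process title seen in this log
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]

# kube-proxy's iptables-restore invocation seen in this log
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```

This shows the repeated netfilter churn is iptables-restore run with `-w 5 -W 100000 --noflush --counters`, i.e. non-flushing, counter-preserving rule reloads.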
Jul 15 11:31:08.575000 audit[5615]: CRED_DISP pid=5615 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.583462 kernel: audit: type=1106 audit(1752579068.575:555): pid=5615 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.583527 kernel: audit: type=1104 audit(1752579068.575:556): pid=5615 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:08.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.41:22-10.0.0.1:56746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:31:13.579042 systemd[1]: Started sshd@23-10.0.0.41:22-10.0.0.1:59268.service.
Jul 15 11:31:13.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.41:22-10.0.0.1:59268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:31:13.580579 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul 15 11:31:13.580667 kernel: audit: type=1130 audit(1752579073.578:558): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.41:22-10.0.0.1:59268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:31:13.614000 audit[5657]: USER_ACCT pid=5657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.615091 sshd[5657]: Accepted publickey for core from 10.0.0.1 port 59268 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:31:13.618000 audit[5657]: CRED_ACQ pid=5657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.619922 sshd[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:31:13.623369 kernel: audit: type=1101 audit(1752579073.614:559): pid=5657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.623418 kernel: audit: type=1103 audit(1752579073.618:560): pid=5657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.623449 kernel: audit: type=1006 audit(1752579073.618:561): pid=5657 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Jul 15 11:31:13.630328 kernel: audit: type=1300 audit(1752579073.618:561): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc13269c70 a2=3 a3=0 items=0 ppid=1 pid=5657 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:13.618000 audit[5657]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc13269c70 a2=3 a3=0 items=0 ppid=1 pid=5657 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:13.627927 systemd[1]: Started session-24.scope.
Jul 15 11:31:13.628698 systemd-logind[1289]: New session 24 of user core.
Jul 15 11:31:13.618000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 15 11:31:13.632679 kernel: audit: type=1327 audit(1752579073.618:561): proctitle=737368643A20636F7265205B707269765D
Jul 15 11:31:13.643000 audit[5657]: USER_START pid=5657 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.650661 kernel: audit: type=1105 audit(1752579073.643:562): pid=5657 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.649000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.655672 kernel: audit: type=1103 audit(1752579073.649:563): pid=5660 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.774475 sshd[5657]: pam_unix(sshd:session): session closed for user core
Jul 15 11:31:13.774000 audit[5657]: USER_END pid=5657 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.774000 audit[5657]: CRED_DISP pid=5657 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.783816 kernel: audit: type=1106 audit(1752579073.774:564): pid=5657 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.783874 kernel: audit: type=1104 audit(1752579073.774:565): pid=5657 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:13.784409 systemd-logind[1289]: Session 24 logged out. Waiting for processes to exit.
Jul 15 11:31:13.785792 systemd[1]: sshd@23-10.0.0.41:22-10.0.0.1:59268.service: Deactivated successfully.
Jul 15 11:31:13.786488 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 11:31:13.787971 systemd-logind[1289]: Removed session 24.
Jul 15 11:31:13.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.41:22-10.0.0.1:59268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:31:14.290684 kubelet[2092]: I0715 11:31:14.290633 2092 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 11:31:14.554000 audit[5674]: NETFILTER_CFG table=filter:130 family=2 entries=10 op=nft_register_rule pid=5674 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:31:14.554000 audit[5674]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd908a7480 a2=0 a3=7ffd908a746c items=0 ppid=2223 pid=5674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:14.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:31:14.559000 audit[5674]: NETFILTER_CFG table=nat:131 family=2 entries=60 op=nft_register_chain pid=5674 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:31:14.559000 audit[5674]: SYSCALL arch=c000003e syscall=46 success=yes exit=21220 a0=3 a1=7ffd908a7480 a2=0 a3=7ffd908a746c items=0 ppid=2223 pid=5674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:14.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:31:15.437000 audit[5719]: NETFILTER_CFG table=filter:132 family=2 entries=9 op=nft_register_rule pid=5719 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:31:15.437000 audit[5719]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd72ccf440 a2=0 a3=7ffd72ccf42c items=0 ppid=2223 pid=5719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:15.437000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:31:15.443000 audit[5719]: NETFILTER_CFG table=nat:133 family=2 entries=55 op=nft_register_chain pid=5719 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 15 11:31:15.443000 audit[5719]: SYSCALL arch=c000003e syscall=46 success=yes exit=20100 a0=3 a1=7ffd72ccf440 a2=0 a3=7ffd72ccf42c items=0 ppid=2223 pid=5719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:15.443000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 15 11:31:17.078071 kubelet[2092]: E0715 11:31:17.078027 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:31:18.777130 systemd[1]: Started sshd@24-10.0.0.41:22-10.0.0.1:59276.service.
Jul 15 11:31:18.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.41:22-10.0.0.1:59276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:31:18.778715 kernel: kauditd_printk_skb: 13 callbacks suppressed
Jul 15 11:31:18.778850 kernel: audit: type=1130 audit(1752579078.775:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.41:22-10.0.0.1:59276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:31:18.883000 audit[5720]: USER_ACCT pid=5720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:18.890665 kernel: audit: type=1101 audit(1752579078.883:572): pid=5720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:18.890707 kernel: audit: type=1103 audit(1752579078.888:573): pid=5720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:18.888000 audit[5720]: CRED_ACQ pid=5720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:18.890772 sshd[5720]: Accepted publickey for core from 10.0.0.1 port 59276 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:31:18.891045 sshd[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:31:18.895473 systemd[1]: Started session-25.scope.
Jul 15 11:31:18.896437 systemd-logind[1289]: New session 25 of user core.
Jul 15 11:31:18.897286 kernel: audit: type=1006 audit(1752579078.888:574): pid=5720 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Jul 15 11:31:18.888000 audit[5720]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebaea4cc0 a2=3 a3=0 items=0 ppid=1 pid=5720 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:18.902402 kernel: audit: type=1300 audit(1752579078.888:574): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebaea4cc0 a2=3 a3=0 items=0 ppid=1 pid=5720 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:31:18.903802 kernel: audit: type=1327 audit(1752579078.888:574): proctitle=737368643A20636F7265205B707269765D
Jul 15 11:31:18.888000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 15 11:31:18.901000 audit[5720]: USER_START pid=5720 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:18.908051 kernel: audit: type=1105 audit(1752579078.901:575): pid=5720 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:18.903000 audit[5723]: CRED_ACQ pid=5723 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:18.911693 kernel: audit: type=1103 audit(1752579078.903:576): pid=5723 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:19.044426 sshd[5720]: pam_unix(sshd:session): session closed for user core
Jul 15 11:31:19.043000 audit[5720]: USER_END pid=5720 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:19.047292 systemd[1]: sshd@24-10.0.0.41:22-10.0.0.1:59276.service: Deactivated successfully.
Jul 15 11:31:19.048058 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 11:31:19.048732 systemd-logind[1289]: Session 25 logged out. Waiting for processes to exit.
Jul 15 11:31:19.049401 systemd-logind[1289]: Removed session 25.
Jul 15 11:31:19.049659 kernel: audit: type=1106 audit(1752579079.043:577): pid=5720 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:19.049709 kernel: audit: type=1104 audit(1752579079.043:578): pid=5720 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:19.043000 audit[5720]: CRED_DISP pid=5720 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 15 11:31:19.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.41:22-10.0.0.1:59276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'