Sep 13 00:53:09.912407 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:53:09.912438 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:53:09.912447 kernel: BIOS-provided physical RAM map:
Sep 13 00:53:09.912453 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:53:09.912458 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:53:09.912463 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:53:09.912470 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 13 00:53:09.912476 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 13 00:53:09.912484 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:53:09.912489 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 00:53:09.912495 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:53:09.912500 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:53:09.912506 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 00:53:09.912511 kernel: NX (Execute Disable) protection: active
Sep 13 00:53:09.912520 kernel: SMBIOS 2.8 present.
Sep 13 00:53:09.912526 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 13 00:53:09.912532 kernel: Hypervisor detected: KVM
Sep 13 00:53:09.912538 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:53:09.912547 kernel: kvm-clock: cpu 0, msr 2119f001, primary cpu clock
Sep 13 00:53:09.912553 kernel: kvm-clock: using sched offset of 3461801157 cycles
Sep 13 00:53:09.912580 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:53:09.912587 kernel: tsc: Detected 2794.750 MHz processor
Sep 13 00:53:09.912593 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:53:09.912602 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:53:09.912608 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 13 00:53:09.912614 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:53:09.912621 kernel: Using GB pages for direct mapping
Sep 13 00:53:09.912627 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:53:09.912633 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 13 00:53:09.912639 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:09.912646 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:09.912652 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:09.912659 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 13 00:53:09.912665 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:09.912672 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:09.912678 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:09.912684 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:09.912690 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 13 00:53:09.912696 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 13 00:53:09.912703 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 13 00:53:09.912713 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 13 00:53:09.912719 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 13 00:53:09.912726 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 13 00:53:09.912732 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 13 00:53:09.912739 kernel: No NUMA configuration found
Sep 13 00:53:09.912745 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 13 00:53:09.912753 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 13 00:53:09.912767 kernel: Zone ranges:
Sep 13 00:53:09.912773 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:53:09.912780 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 13 00:53:09.912814 kernel: Normal empty
Sep 13 00:53:09.912828 kernel: Movable zone start for each node
Sep 13 00:53:09.912839 kernel: Early memory node ranges
Sep 13 00:53:09.912846 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:53:09.912853 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 13 00:53:09.912862 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 13 00:53:09.912872 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:53:09.912878 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:53:09.912885 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 00:53:09.912892 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:53:09.912899 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:53:09.912905 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:53:09.912912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:53:09.912918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:53:09.912925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:53:09.912935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:53:09.912942 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:53:09.912949 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:53:09.912955 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:53:09.912962 kernel: TSC deadline timer available
Sep 13 00:53:09.912968 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 13 00:53:09.912975 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:53:09.912981 kernel: kvm-guest: setup PV sched yield
Sep 13 00:53:09.912988 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 00:53:09.912996 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:53:09.913003 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:53:09.913019 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:53:09.913026 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Sep 13 00:53:09.913041 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Sep 13 00:53:09.913049 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:53:09.913055 kernel: kvm-guest: setup async PF for cpu 0
Sep 13 00:53:09.913062 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Sep 13 00:53:09.913069 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:53:09.913078 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:53:09.913084 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 13 00:53:09.913091 kernel: Policy zone: DMA32
Sep 13 00:53:09.913099 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:53:09.913106 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:53:09.913112 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:53:09.913119 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:53:09.913126 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:53:09.913134 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 134796K reserved, 0K cma-reserved)
Sep 13 00:53:09.913141 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:53:09.913148 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:53:09.913154 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:53:09.913161 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:53:09.913168 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:53:09.913175 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:53:09.913182 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:53:09.913188 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:53:09.913196 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:53:09.913203 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:53:09.913210 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:53:09.913216 kernel: random: crng init done
Sep 13 00:53:09.913223 kernel: Console: colour VGA+ 80x25
Sep 13 00:53:09.913229 kernel: printk: console [ttyS0] enabled
Sep 13 00:53:09.913236 kernel: ACPI: Core revision 20210730
Sep 13 00:53:09.913243 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:53:09.913249 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:53:09.913257 kernel: x2apic enabled
Sep 13 00:53:09.913264 kernel: Switched APIC routing to physical x2apic.
Sep 13 00:53:09.913273 kernel: kvm-guest: setup PV IPIs
Sep 13 00:53:09.913279 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:53:09.913286 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:53:09.913295 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 13 00:53:09.913301 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:53:09.913308 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:53:09.913315 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:53:09.913329 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:53:09.913336 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:53:09.913343 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:53:09.913351 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:53:09.913358 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:53:09.913365 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:53:09.913372 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:53:09.913379 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 00:53:09.913386 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:53:09.913394 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:53:09.913401 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:53:09.913408 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:53:09.913416 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:53:09.913431 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:53:09.913438 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:53:09.913445 kernel: LSM: Security Framework initializing
Sep 13 00:53:09.913453 kernel: SELinux: Initializing.
Sep 13 00:53:09.913460 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:53:09.913467 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:53:09.913474 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:53:09.913481 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:53:09.913488 kernel: ... version: 0
Sep 13 00:53:09.913495 kernel: ... bit width: 48
Sep 13 00:53:09.913502 kernel: ... generic registers: 6
Sep 13 00:53:09.913509 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:53:09.913517 kernel: ... max period: 00007fffffffffff
Sep 13 00:53:09.913524 kernel: ... fixed-purpose events: 0
Sep 13 00:53:09.913531 kernel: ... event mask: 000000000000003f
Sep 13 00:53:09.913538 kernel: signal: max sigframe size: 1776
Sep 13 00:53:09.913545 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:53:09.913552 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:53:09.913569 kernel: x86: Booting SMP configuration:
Sep 13 00:53:09.913587 kernel: .... node #0, CPUs: #1
Sep 13 00:53:09.913594 kernel: kvm-clock: cpu 1, msr 2119f041, secondary cpu clock
Sep 13 00:53:09.913602 kernel: kvm-guest: setup async PF for cpu 1
Sep 13 00:53:09.913609 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Sep 13 00:53:09.913616 kernel: #2
Sep 13 00:53:09.913623 kernel: kvm-clock: cpu 2, msr 2119f081, secondary cpu clock
Sep 13 00:53:09.913630 kernel: kvm-guest: setup async PF for cpu 2
Sep 13 00:53:09.913637 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Sep 13 00:53:09.913647 kernel: #3
Sep 13 00:53:09.913654 kernel: kvm-clock: cpu 3, msr 2119f0c1, secondary cpu clock
Sep 13 00:53:09.913661 kernel: kvm-guest: setup async PF for cpu 3
Sep 13 00:53:09.913668 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Sep 13 00:53:09.913677 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:53:09.913684 kernel: smpboot: Max logical packages: 1
Sep 13 00:53:09.913691 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 13 00:53:09.913698 kernel: devtmpfs: initialized
Sep 13 00:53:09.913705 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:53:09.913712 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:53:09.913719 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:53:09.913726 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:53:09.913733 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:53:09.913741 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:53:09.913748 kernel: audit: type=2000 audit(1757724788.848:1): state=initialized audit_enabled=0 res=1
Sep 13 00:53:09.913755 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:53:09.913762 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:53:09.913769 kernel: cpuidle: using governor menu
Sep 13 00:53:09.913776 kernel: ACPI: bus type PCI registered
Sep 13 00:53:09.913783 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:53:09.913789 kernel: dca service started, version 1.12.1
Sep 13 00:53:09.913797 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:53:09.913805 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Sep 13 00:53:09.913812 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:53:09.913819 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:53:09.913826 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:53:09.913833 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:53:09.913840 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:53:09.913847 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:53:09.913854 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:53:09.913861 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:53:09.913870 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:53:09.913876 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:53:09.913883 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:53:09.913890 kernel: ACPI: Interpreter enabled
Sep 13 00:53:09.913897 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:53:09.913904 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:53:09.913911 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:53:09.913918 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:53:09.913925 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:53:09.914082 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:53:09.914164 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:53:09.914239 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:53:09.914248 kernel: PCI host bridge to bus 0000:00
Sep 13 00:53:09.914352 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:53:09.914431 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:53:09.914510 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:53:09.914608 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 00:53:09.914675 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:53:09.914743 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 00:53:09.914811 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:53:09.914912 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:53:09.915004 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 13 00:53:09.915086 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 13 00:53:09.915162 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 13 00:53:09.915272 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 13 00:53:09.915360 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:53:09.915469 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:53:09.915585 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 13 00:53:09.915678 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 13 00:53:09.915761 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 13 00:53:09.915851 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:53:09.915930 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:53:09.916007 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 13 00:53:09.916082 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 13 00:53:09.916174 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:53:09.916259 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 13 00:53:09.916336 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 13 00:53:09.916415 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 13 00:53:09.916502 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 13 00:53:09.916636 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:53:09.916714 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:53:09.916804 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:53:09.916884 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 13 00:53:09.916958 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 13 00:53:09.917063 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:53:09.917141 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 13 00:53:09.917150 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:53:09.917158 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:53:09.917165 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:53:09.917172 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:53:09.917182 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:53:09.917190 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:53:09.917197 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:53:09.917203 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:53:09.917210 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:53:09.917217 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:53:09.917224 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:53:09.917231 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:53:09.917238 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:53:09.917247 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:53:09.917254 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:53:09.917261 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:53:09.917268 kernel: iommu: Default domain type: Translated
Sep 13 00:53:09.917275 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:53:09.917352 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:53:09.917476 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:53:09.917555 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:53:09.917580 kernel: vgaarb: loaded
Sep 13 00:53:09.917597 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:53:09.917606 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:53:09.917613 kernel: PTP clock support registered
Sep 13 00:53:09.917620 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:53:09.917627 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:53:09.917634 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:53:09.917641 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 13 00:53:09.917648 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:53:09.917663 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:53:09.917670 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:53:09.917677 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:53:09.917684 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:53:09.917691 kernel: pnp: PnP ACPI init
Sep 13 00:53:09.917798 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:53:09.917809 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:53:09.917816 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:53:09.917826 kernel: NET: Registered PF_INET protocol family
Sep 13 00:53:09.917833 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:53:09.917840 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:53:09.917847 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:53:09.917855 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:53:09.917862 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 00:53:09.917869 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:53:09.917876 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:53:09.917883 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:53:09.917892 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:53:09.917899 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:53:09.917968 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:53:09.918038 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:53:09.918105 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:53:09.918172 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 00:53:09.918239 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:53:09.918305 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 00:53:09.918317 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:53:09.918324 kernel: Initialise system trusted keyrings
Sep 13 00:53:09.918331 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:53:09.918338 kernel: Key type asymmetric registered
Sep 13 00:53:09.918345 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:53:09.918352 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:53:09.918360 kernel: io scheduler mq-deadline registered
Sep 13 00:53:09.918368 kernel: io scheduler kyber registered
Sep 13 00:53:09.918376 kernel: io scheduler bfq registered
Sep 13 00:53:09.918384 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:53:09.918394 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:53:09.918402 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:53:09.918409 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:53:09.918416 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:53:09.918430 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:53:09.918438 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:53:09.918445 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:53:09.918452 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:53:09.918547 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:53:09.918574 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:53:09.918646 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:53:09.918717 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:53:09 UTC (1757724789)
Sep 13 00:53:09.918786 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:53:09.918796 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:53:09.918803 kernel: Segment Routing with IPv6
Sep 13 00:53:09.918810 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:53:09.918821 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:53:09.918828 kernel: Key type dns_resolver registered
Sep 13 00:53:09.918835 kernel: IPI shorthand broadcast: enabled
Sep 13 00:53:09.918842 kernel: sched_clock: Marking stable (416487504, 100240411)->(578570780, -61842865)
Sep 13 00:53:09.918849 kernel: registered taskstats version 1
Sep 13 00:53:09.918856 kernel: Loading compiled-in X.509 certificates
Sep 13 00:53:09.918863 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:53:09.918870 kernel: Key type .fscrypt registered
Sep 13 00:53:09.918877 kernel: Key type fscrypt-provisioning registered
Sep 13 00:53:09.918885 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:53:09.918893 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:53:09.918900 kernel: ima: No architecture policies found
Sep 13 00:53:09.918907 kernel: clk: Disabling unused clocks
Sep 13 00:53:09.918914 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:53:09.918921 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:53:09.918928 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:53:09.918935 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:53:09.918942 kernel: Run /init as init process
Sep 13 00:53:09.918951 kernel: with arguments:
Sep 13 00:53:09.918957 kernel: /init
Sep 13 00:53:09.918964 kernel: with environment:
Sep 13 00:53:09.918971 kernel: HOME=/
Sep 13 00:53:09.918978 kernel: TERM=linux
Sep 13 00:53:09.918985 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:53:09.918995 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:53:09.919004 systemd[1]: Detected virtualization kvm.
Sep 13 00:53:09.919014 systemd[1]: Detected architecture x86-64.
Sep 13 00:53:09.919021 systemd[1]: Running in initrd.
Sep 13 00:53:09.919028 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:53:09.919036 systemd[1]: Hostname set to .
Sep 13 00:53:09.919043 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:53:09.919051 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:53:09.919059 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:53:09.919066 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:53:09.919075 systemd[1]: Reached target paths.target.
Sep 13 00:53:09.919082 systemd[1]: Reached target slices.target.
Sep 13 00:53:09.919098 systemd[1]: Reached target swap.target.
Sep 13 00:53:09.919107 systemd[1]: Reached target timers.target.
Sep 13 00:53:09.919116 systemd[1]: Listening on iscsid.socket.
Sep 13 00:53:09.919123 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:53:09.919132 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:53:09.919140 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:53:09.919148 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:53:09.919155 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:53:09.919163 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:53:09.919171 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:53:09.919178 systemd[1]: Reached target sockets.target.
Sep 13 00:53:09.919186 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:53:09.919196 systemd[1]: Finished network-cleanup.service.
Sep 13 00:53:09.919203 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:53:09.919211 systemd[1]: Starting systemd-journald.service...
Sep 13 00:53:09.919219 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:53:09.919227 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:53:09.919234 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:53:09.919242 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:53:09.919250 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:53:09.919261 systemd-journald[197]: Journal started
Sep 13 00:53:09.919307 systemd-journald[197]: Runtime Journal (/run/log/journal/86b8d5ccf97e4fada076ab2924c2171d) is 6.0M, max 48.5M, 42.5M free.
Sep 13 00:53:09.913061 systemd-modules-load[198]: Inserted module 'overlay'
Sep 13 00:53:09.956467 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:53:09.956494 kernel: Bridge firewalling registered
Sep 13 00:53:09.933289 systemd-resolved[199]: Positive Trust Anchors:
Sep 13 00:53:09.933305 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:53:09.962183 kernel: audit: type=1130 audit(1757724789.957:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:09.962202 systemd[1]: Started systemd-journald.service.
Sep 13 00:53:09.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:09.933334 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:53:09.935589 systemd-resolved[199]: Defaulting to hostname 'linux'.
Sep 13 00:53:09.946148 systemd-modules-load[198]: Inserted module 'br_netfilter'
Sep 13 00:53:09.969089 systemd[1]: Started systemd-resolved.service.
Sep 13 00:53:09.981653 kernel: audit: type=1130 audit(1757724789.968:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.981682 kernel: SCSI subsystem initialized Sep 13 00:53:09.981693 kernel: audit: type=1130 audit(1757724789.968:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.981705 kernel: audit: type=1130 audit(1757724789.971:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.969695 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 00:53:09.972077 systemd[1]: Reached target nss-lookup.target. Sep 13 00:53:09.983139 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 00:53:09.985456 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:53:09.991582 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:53:09.998293 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 13 00:53:09.998315 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:53:09.998325 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:53:09.998339 kernel: audit: type=1130 audit(1757724789.993:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.998471 systemd-modules-load[198]: Inserted module 'dm_multipath' Sep 13 00:53:09.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:10.003636 kernel: audit: type=1130 audit(1757724789.998:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.998474 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:53:09.999734 systemd[1]: Starting dracut-cmdline.service... Sep 13 00:53:10.003810 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:53:10.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:10.010580 kernel: audit: type=1130 audit(1757724790.004:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:10.010682 dracut-cmdline[219]: dracut-dracut-053 Sep 13 00:53:10.005995 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:53:10.012437 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:53:10.017835 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:53:10.022078 kernel: audit: type=1130 audit(1757724790.017:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:10.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:10.075603 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:53:10.091592 kernel: iscsi: registered transport (tcp) Sep 13 00:53:10.112909 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:53:10.112951 kernel: QLogic iSCSI HBA Driver Sep 13 00:53:10.137653 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:53:10.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:10.140034 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:53:10.143401 kernel: audit: type=1130 audit(1757724790.138:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:10.187589 kernel: raid6: avx2x4 gen() 26232 MB/s Sep 13 00:53:10.204588 kernel: raid6: avx2x4 xor() 7065 MB/s Sep 13 00:53:10.221589 kernel: raid6: avx2x2 gen() 30571 MB/s Sep 13 00:53:10.238581 kernel: raid6: avx2x2 xor() 19202 MB/s Sep 13 00:53:10.255588 kernel: raid6: avx2x1 gen() 26556 MB/s Sep 13 00:53:10.272589 kernel: raid6: avx2x1 xor() 15210 MB/s Sep 13 00:53:10.289579 kernel: raid6: sse2x4 gen() 14803 MB/s Sep 13 00:53:10.306584 kernel: raid6: sse2x4 xor() 6351 MB/s Sep 13 00:53:10.323582 kernel: raid6: sse2x2 gen() 14251 MB/s Sep 13 00:53:10.340580 kernel: raid6: sse2x2 xor() 9773 MB/s Sep 13 00:53:10.357579 kernel: raid6: sse2x1 gen() 12399 MB/s Sep 13 00:53:10.374929 kernel: raid6: sse2x1 xor() 7801 MB/s Sep 13 00:53:10.374949 kernel: raid6: using algorithm avx2x2 gen() 30571 MB/s Sep 13 00:53:10.374959 kernel: raid6: .... xor() 19202 MB/s, rmw enabled Sep 13 00:53:10.375620 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:53:10.388587 kernel: xor: automatically using best checksumming function avx Sep 13 00:53:10.477596 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:53:10.486842 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:53:10.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:10.488000 audit: BPF prog-id=7 op=LOAD Sep 13 00:53:10.488000 audit: BPF prog-id=8 op=LOAD Sep 13 00:53:10.489466 systemd[1]: Starting systemd-udevd.service... Sep 13 00:53:10.501826 systemd-udevd[402]: Using default interface naming scheme 'v252'. Sep 13 00:53:10.505728 systemd[1]: Started systemd-udevd.service. Sep 13 00:53:10.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:10.508554 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:53:10.518522 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Sep 13 00:53:10.544265 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:53:10.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:10.546721 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:53:10.581029 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:53:10.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:10.609591 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:53:10.615671 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:53:10.615685 kernel: GPT:9289727 != 19775487 Sep 13 00:53:10.615693 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:53:10.615702 kernel: GPT:9289727 != 19775487 Sep 13 00:53:10.615716 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:53:10.615725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:10.619591 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:53:10.629023 kernel: libata version 3.00 loaded. Sep 13 00:53:10.639590 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 13 00:53:10.639615 kernel: AES CTR mode by8 optimization enabled Sep 13 00:53:10.647602 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:53:10.665516 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:53:10.665533 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:53:10.665678 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:53:10.665760 kernel: scsi host0: ahci Sep 13 00:53:10.665868 kernel: scsi host1: ahci Sep 13 00:53:10.665959 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (449) Sep 13 00:53:10.665969 kernel: scsi host2: ahci Sep 13 00:53:10.666062 kernel: scsi host3: ahci Sep 13 00:53:10.666162 kernel: scsi host4: ahci Sep 13 00:53:10.666254 kernel: scsi host5: ahci Sep 13 00:53:10.666364 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 13 00:53:10.666374 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 13 00:53:10.666383 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 13 00:53:10.666392 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 13 00:53:10.666401 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 13 00:53:10.666421 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 13 00:53:10.660391 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:53:10.693289 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:53:10.700655 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:53:10.704643 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:53:10.708668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:53:10.710348 systemd[1]: Starting disk-uuid.service... Sep 13 00:53:10.719354 disk-uuid[526]: Primary Header is updated. 
Sep 13 00:53:10.719354 disk-uuid[526]: Secondary Entries is updated. Sep 13 00:53:10.719354 disk-uuid[526]: Secondary Header is updated. Sep 13 00:53:10.722990 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:10.725581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:10.974009 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:10.974101 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:10.974113 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:10.974125 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:10.975609 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:53:10.976599 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:10.977601 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:53:10.978990 kernel: ata3.00: applying bridge limits Sep 13 00:53:10.979011 kernel: ata3.00: configured for UDMA/100 Sep 13 00:53:10.979594 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:53:11.010597 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:53:11.027392 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:53:11.027418 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:53:11.727427 disk-uuid[527]: The operation has completed successfully. Sep 13 00:53:11.729221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:11.751105 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:53:11.751184 systemd[1]: Finished disk-uuid.service. Sep 13 00:53:11.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:11.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:11.759507 systemd[1]: Starting verity-setup.service... Sep 13 00:53:11.771603 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:53:11.789922 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:53:11.791588 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:53:11.793451 systemd[1]: Finished verity-setup.service. Sep 13 00:53:11.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:11.852259 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:53:11.853739 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:53:11.853108 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:53:11.853755 systemd[1]: Starting ignition-setup.service... Sep 13 00:53:11.854989 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:53:11.863238 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:53:11.863269 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:53:11.863282 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:53:11.870742 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:53:11.879195 systemd[1]: Finished ignition-setup.service. Sep 13 00:53:11.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:11.880744 systemd[1]: Starting ignition-fetch-offline.service... 
Sep 13 00:53:11.920913 ignition[645]: Ignition 2.14.0 Sep 13 00:53:11.921953 ignition[645]: Stage: fetch-offline Sep 13 00:53:11.922011 ignition[645]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:11.922020 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:11.922123 ignition[645]: parsed url from cmdline: "" Sep 13 00:53:11.924468 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:53:11.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:11.925000 audit: BPF prog-id=9 op=LOAD Sep 13 00:53:11.922127 ignition[645]: no config URL provided Sep 13 00:53:11.922132 ignition[645]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:53:11.927009 systemd[1]: Starting systemd-networkd.service... Sep 13 00:53:11.922142 ignition[645]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:53:11.922163 ignition[645]: op(1): [started] loading QEMU firmware config module Sep 13 00:53:11.922169 ignition[645]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:53:11.932853 ignition[645]: op(1): [finished] loading QEMU firmware config module Sep 13 00:53:11.950519 systemd-networkd[720]: lo: Link UP Sep 13 00:53:11.955610 systemd-networkd[720]: lo: Gained carrier Sep 13 00:53:11.956219 systemd-networkd[720]: Enumeration completed Sep 13 00:53:12.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:11.956442 systemd-networkd[720]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:53:12.001931 systemd[1]: Started systemd-networkd.service. 
Sep 13 00:53:12.003748 systemd-networkd[720]: eth0: Link UP Sep 13 00:53:12.003751 systemd-networkd[720]: eth0: Gained carrier Sep 13 00:53:12.004573 systemd[1]: Reached target network.target. Sep 13 00:53:12.010726 systemd[1]: Starting iscsiuio.service... Sep 13 00:53:12.014856 systemd[1]: Started iscsiuio.service. Sep 13 00:53:12.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:12.017009 systemd[1]: Starting iscsid.service... Sep 13 00:53:12.020200 iscsid[725]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:53:12.020200 iscsid[725]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:53:12.020200 iscsid[725]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:53:12.020200 iscsid[725]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:53:12.020200 iscsid[725]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:53:12.020200 iscsid[725]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:53:12.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:12.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 13 00:53:12.021319 systemd[1]: Started iscsid.service. Sep 13 00:53:12.023593 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:53:12.033032 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:53:12.034755 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:53:12.036358 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:53:12.037345 systemd[1]: Reached target remote-fs.target. Sep 13 00:53:12.038822 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:53:12.045988 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:53:12.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:12.058458 ignition[645]: parsing config with SHA512: b94afee18a17fefa8a53f453bc03fdf4618fea979bfe08810ea55a45628d9d3cabbabdde70b8cb1f5057fada7d5337352b3d333edb2237382e2836c9bfb7a87d Sep 13 00:53:12.065647 unknown[645]: fetched base config from "system" Sep 13 00:53:12.065659 unknown[645]: fetched user config from "qemu" Sep 13 00:53:12.066131 ignition[645]: fetch-offline: fetch-offline passed Sep 13 00:53:12.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:12.067320 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:53:12.066184 ignition[645]: Ignition finished successfully Sep 13 00:53:12.068475 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:53:12.069300 systemd[1]: Starting ignition-kargs.service... 
Sep 13 00:53:12.071677 systemd-networkd[720]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:53:12.077903 ignition[740]: Ignition 2.14.0 Sep 13 00:53:12.077913 ignition[740]: Stage: kargs Sep 13 00:53:12.078000 ignition[740]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:12.078011 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:12.080201 systemd[1]: Finished ignition-kargs.service. Sep 13 00:53:12.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:12.079024 ignition[740]: kargs: kargs passed Sep 13 00:53:12.079058 ignition[740]: Ignition finished successfully Sep 13 00:53:12.082867 systemd[1]: Starting ignition-disks.service... Sep 13 00:53:12.089716 ignition[747]: Ignition 2.14.0 Sep 13 00:53:12.089726 ignition[747]: Stage: disks Sep 13 00:53:12.089826 ignition[747]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:12.089835 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:12.093611 ignition[747]: disks: disks passed Sep 13 00:53:12.094226 ignition[747]: Ignition finished successfully Sep 13 00:53:12.095755 systemd[1]: Finished ignition-disks.service. Sep 13 00:53:12.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:12.096674 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:53:12.098082 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:53:12.098889 systemd[1]: Reached target local-fs.target. Sep 13 00:53:12.099289 systemd[1]: Reached target sysinit.target. Sep 13 00:53:12.100001 systemd[1]: Reached target basic.target. Sep 13 00:53:12.101027 systemd[1]: Starting systemd-fsck-root.service... 
Sep 13 00:53:12.192606 systemd-fsck[755]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:53:12.450630 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:53:12.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:12.452610 systemd[1]: Mounting sysroot.mount... Sep 13 00:53:12.459591 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:53:12.460318 systemd[1]: Mounted sysroot.mount. Sep 13 00:53:12.461897 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:53:12.464272 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:53:12.464868 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:53:12.464909 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:53:12.464931 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:53:12.472842 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:53:12.474277 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:53:12.480255 initrd-setup-root[765]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:53:12.485419 initrd-setup-root[773]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:53:12.488955 initrd-setup-root[781]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:53:12.493673 initrd-setup-root[789]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:53:12.521217 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:53:12.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:12.522947 systemd[1]: Starting ignition-mount.service... Sep 13 00:53:12.524238 systemd[1]: Starting sysroot-boot.service... Sep 13 00:53:12.528139 bash[806]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:53:12.536196 ignition[808]: INFO : Ignition 2.14.0 Sep 13 00:53:12.536196 ignition[808]: INFO : Stage: mount Sep 13 00:53:12.537827 ignition[808]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:12.537827 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:12.537827 ignition[808]: INFO : mount: mount passed Sep 13 00:53:12.537827 ignition[808]: INFO : Ignition finished successfully Sep 13 00:53:12.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:12.537992 systemd[1]: Finished ignition-mount.service. Sep 13 00:53:12.543371 systemd[1]: Finished sysroot-boot.service. Sep 13 00:53:12.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:12.800365 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:53:12.806593 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Sep 13 00:53:12.808682 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:53:12.808704 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:53:12.808714 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:53:12.812898 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:53:12.815490 systemd[1]: Starting ignition-files.service... 
Sep 13 00:53:12.828803 ignition[836]: INFO : Ignition 2.14.0 Sep 13 00:53:12.828803 ignition[836]: INFO : Stage: files Sep 13 00:53:12.830442 ignition[836]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:12.830442 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:12.830442 ignition[836]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:53:12.834380 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:53:12.834380 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:53:12.838034 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:53:12.838034 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:53:12.838034 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:53:12.838034 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:53:12.838034 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:53:12.838034 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:53:12.838034 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:53:12.835342 unknown[836]: wrote ssh authorized keys file for user: core Sep 13 00:53:12.881328 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:53:13.168596 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 
00:53:13.168596 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:53:13.172215 
ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:53:13.172215 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:53:13.517716 systemd-networkd[720]: eth0: Gained IPv6LL Sep 13 00:53:13.626339 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 13 00:53:13.931366 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:53:13.931366 ignition[836]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Sep 13 
00:53:13.935719 ignition[836]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:53:13.935719 ignition[836]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:53:13.966402 ignition[836]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:53:13.966402 ignition[836]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:53:13.966402 ignition[836]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:53:13.966402 ignition[836]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:53:13.966402 ignition[836]: INFO : files: files passed Sep 13 00:53:13.966402 ignition[836]: INFO : Ignition finished successfully Sep 13 00:53:13.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:13.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:13.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:13.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:13.964911 systemd[1]: Finished ignition-files.service. Sep 13 00:53:13.967197 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:53:13.968998 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:53:13.983663 initrd-setup-root-after-ignition[861]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 13 00:53:13.969944 systemd[1]: Starting ignition-quench.service... Sep 13 00:53:13.986065 initrd-setup-root-after-ignition[864]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:53:13.972445 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:53:13.972530 systemd[1]: Finished ignition-quench.service. Sep 13 00:53:13.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:13.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:13.973922 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:53:13.976445 systemd[1]: Reached target ignition-complete.target. Sep 13 00:53:13.978476 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:53:13.988943 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:53:13.989021 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:53:13.990297 systemd[1]: Reached target initrd-fs.target. Sep 13 00:53:13.991135 systemd[1]: Reached target initrd.target. Sep 13 00:53:13.992926 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:53:13.993540 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:53:14.003025 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:53:14.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.004422 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:53:14.011821 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:53:14.012694 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:53:14.014197 systemd[1]: Stopped target timers.target. Sep 13 00:53:14.015687 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:53:14.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.015774 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:53:14.017203 systemd[1]: Stopped target initrd.target. Sep 13 00:53:14.018735 systemd[1]: Stopped target basic.target. Sep 13 00:53:14.020165 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:53:14.021685 systemd[1]: Stopped target ignition-diskful.target. 
Sep 13 00:53:14.023147 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:53:14.024763 systemd[1]: Stopped target remote-fs.target. Sep 13 00:53:14.026266 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:53:14.027914 systemd[1]: Stopped target sysinit.target. Sep 13 00:53:14.029408 systemd[1]: Stopped target local-fs.target. Sep 13 00:53:14.030958 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:53:14.032419 systemd[1]: Stopped target swap.target. Sep 13 00:53:14.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.033761 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:53:14.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.033847 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:53:14.035316 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:53:14.036667 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:53:14.036752 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:53:14.037988 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:53:14.038072 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:53:14.038314 systemd[1]: Stopped target paths.target. Sep 13 00:53:14.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:14.038411 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:53:14.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.041603 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:53:14.052390 iscsid[725]: iscsid shutting down. Sep 13 00:53:14.043089 systemd[1]: Stopped target slices.target. Sep 13 00:53:14.044513 systemd[1]: Stopped target sockets.target. Sep 13 00:53:14.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.046186 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:53:14.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.046275 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:53:14.047928 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:53:14.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:14.070674 ignition[877]: INFO : Ignition 2.14.0 Sep 13 00:53:14.070674 ignition[877]: INFO : Stage: umount Sep 13 00:53:14.070674 ignition[877]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:14.070674 ignition[877]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:14.070674 ignition[877]: INFO : umount: umount passed Sep 13 00:53:14.070674 ignition[877]: INFO : Ignition finished successfully Sep 13 00:53:14.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.048008 systemd[1]: Stopped ignition-files.service. Sep 13 00:53:14.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.049903 systemd[1]: Stopping ignition-mount.service... 
Sep 13 00:53:14.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.051076 systemd[1]: Stopping iscsid.service... Sep 13 00:53:14.052986 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:53:14.053755 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:53:14.053893 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:53:14.055366 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:53:14.055481 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:53:14.058619 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:53:14.058697 systemd[1]: Stopped iscsid.service. Sep 13 00:53:14.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.069097 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:53:14.069167 systemd[1]: Stopped ignition-mount.service. Sep 13 00:53:14.070954 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:53:14.071019 systemd[1]: Closed iscsid.socket. Sep 13 00:53:14.072274 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:53:14.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.072307 systemd[1]: Stopped ignition-disks.service. Sep 13 00:53:14.073807 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:53:14.135000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:53:14.073839 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:53:14.075723 systemd[1]: ignition-setup.service: Deactivated successfully. 
Sep 13 00:53:14.075754 systemd[1]: Stopped ignition-setup.service. Sep 13 00:53:14.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.077718 systemd[1]: Stopping iscsiuio.service... Sep 13 00:53:14.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.079948 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:53:14.080366 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:53:14.080453 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:53:14.081897 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:53:14.081966 systemd[1]: Stopped iscsiuio.service. Sep 13 00:53:14.083988 systemd[1]: Stopped target network.target. Sep 13 00:53:14.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.084943 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:53:14.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.084991 systemd[1]: Closed iscsiuio.socket. Sep 13 00:53:14.086432 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:53:14.088187 systemd[1]: Stopping systemd-resolved.service... 
Sep 13 00:53:14.091600 systemd-networkd[720]: eth0: DHCPv6 lease lost Sep 13 00:53:14.167000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:53:14.128368 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:53:14.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.128461 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:53:14.132035 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:53:14.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.132114 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:53:14.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.136477 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:53:14.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.136504 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:53:14.150173 systemd[1]: Stopping network-cleanup.service... 
Sep 13 00:53:14.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.150880 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:53:14.150923 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:53:14.152764 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:53:14.152798 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:53:14.154345 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:53:14.154379 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:53:14.155954 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:53:14.158374 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:53:14.161006 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:53:14.161087 systemd[1]: Stopped network-cleanup.service. Sep 13 00:53:14.162584 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:53:14.162698 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:53:14.164990 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:53:14.165023 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:53:14.166446 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:53:14.166471 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:53:14.167998 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:53:14.168034 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:53:14.169555 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Sep 13 00:53:14.169598 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:53:14.171215 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:53:14.171248 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:53:14.173505 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:53:14.174599 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:53:14.174642 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 00:53:14.175598 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:53:14.175633 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:53:14.177027 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:53:14.177060 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:53:14.178613 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 13 00:53:14.178975 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:53:14.179046 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:53:14.294676 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:53:14.294805 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:53:14.301212 kernel: kauditd_printk_skb: 64 callbacks suppressed Sep 13 00:53:14.301240 kernel: audit: type=1131 audit(1757724794.296:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.296726 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:53:14.301364 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Sep 13 00:53:14.307485 kernel: audit: type=1131 audit(1757724794.301:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:14.301409 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:53:14.302536 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:53:14.309475 systemd[1]: Switching root. Sep 13 00:53:14.312000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:53:14.312000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:53:14.314827 kernel: audit: type=1334 audit(1757724794.312:77): prog-id=8 op=UNLOAD Sep 13 00:53:14.314855 kernel: audit: type=1334 audit(1757724794.312:78): prog-id=7 op=UNLOAD Sep 13 00:53:14.314865 kernel: audit: type=1334 audit(1757724794.314:79): prog-id=5 op=UNLOAD Sep 13 00:53:14.314000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:53:14.315000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:53:14.316867 kernel: audit: type=1334 audit(1757724794.315:80): prog-id=4 op=UNLOAD Sep 13 00:53:14.316000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:53:14.317884 kernel: audit: type=1334 audit(1757724794.316:81): prog-id=3 op=UNLOAD Sep 13 00:53:14.334060 systemd-journald[197]: Journal stopped Sep 13 00:53:17.011862 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Sep 13 00:53:17.011971 kernel: audit: type=1335 audit(1757724794.333:82): pid=197 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Sep 13 00:53:17.012016 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:53:17.012036 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 13 00:53:17.012046 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:53:17.012057 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:53:17.012070 kernel: SELinux: policy capability open_perms=1 Sep 13 00:53:17.012082 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:53:17.012094 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:53:17.012108 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:53:17.012118 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:53:17.012128 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:53:17.012155 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:53:17.012168 kernel: audit: type=1403 audit(1757724794.421:83): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:53:17.012180 systemd[1]: Successfully loaded SELinux policy in 38.098ms. Sep 13 00:53:17.012203 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.496ms. Sep 13 00:53:17.012215 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:53:17.012234 systemd[1]: Detected virtualization kvm. Sep 13 00:53:17.012255 systemd[1]: Detected architecture x86-64. Sep 13 00:53:17.012270 systemd[1]: Detected first boot. Sep 13 00:53:17.012281 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:53:17.012292 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Sep 13 00:53:17.012306 kernel: audit: type=1400 audit(1757724794.843:84): avc: denied { associate } for pid=927 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 00:53:17.012319 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:53:17.012338 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:53:17.012356 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:53:17.012369 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:53:17.012380 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:53:17.012397 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 13 00:53:17.012408 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:53:17.012418 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:53:17.012434 systemd[1]: Created slice system-getty.slice. Sep 13 00:53:17.012447 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:53:17.012458 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:53:17.012468 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:53:17.012479 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:53:17.012489 systemd[1]: Created slice user.slice. Sep 13 00:53:17.012499 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:53:17.012510 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:53:17.012520 systemd[1]: Set up automount boot.automount. 
Sep 13 00:53:17.012538 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:53:17.012549 systemd[1]: Reached target integritysetup.target. Sep 13 00:53:17.012585 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:53:17.012641 systemd[1]: Reached target remote-fs.target. Sep 13 00:53:17.012656 systemd[1]: Reached target slices.target. Sep 13 00:53:17.012683 systemd[1]: Reached target swap.target. Sep 13 00:53:17.012697 systemd[1]: Reached target torcx.target. Sep 13 00:53:17.012710 systemd[1]: Reached target veritysetup.target. Sep 13 00:53:17.012728 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:53:17.012743 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:53:17.012759 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:53:17.012770 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:53:17.012781 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:53:17.012791 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:53:17.012802 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:53:17.012815 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:53:17.012826 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:53:17.012836 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:53:17.012851 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:53:17.012861 systemd[1]: Mounting media.mount... Sep 13 00:53:17.012872 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:17.012883 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:53:17.012893 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:53:17.012904 systemd[1]: Mounting tmp.mount... Sep 13 00:53:17.012914 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:53:17.012925 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Sep 13 00:53:17.012946 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:53:17.012961 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:53:17.012975 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:53:17.012986 systemd[1]: Starting modprobe@drm.service... Sep 13 00:53:17.012996 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:53:17.013009 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:53:17.013020 systemd[1]: Starting modprobe@loop.service... Sep 13 00:53:17.013034 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:53:17.013046 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 00:53:17.013056 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 13 00:53:17.013068 systemd[1]: Starting systemd-journald.service... Sep 13 00:53:17.013078 kernel: fuse: init (API version 7.34) Sep 13 00:53:17.013088 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:53:17.013099 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:53:17.013109 kernel: loop: module loaded Sep 13 00:53:17.013119 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:53:17.013135 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:53:17.013145 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:17.013159 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:53:17.013171 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:53:17.013184 systemd-journald[1018]: Journal started Sep 13 00:53:17.013240 systemd-journald[1018]: Runtime Journal (/run/log/journal/86b8d5ccf97e4fada076ab2924c2171d) is 6.0M, max 48.5M, 42.5M free. 
Sep 13 00:53:16.929000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:53:16.929000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 00:53:17.010000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:53:17.010000 audit[1018]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe5b3bd930 a2=4000 a3=7ffe5b3bd9cc items=0 ppid=1 pid=1018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:17.010000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:53:17.017926 systemd[1]: Started systemd-journald.service. Sep 13 00:53:17.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.016882 systemd[1]: Mounted media.mount. Sep 13 00:53:17.017750 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:53:17.018630 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:53:17.019503 systemd[1]: Mounted tmp.mount. Sep 13 00:53:17.020581 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:53:17.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.021934 systemd[1]: Finished kmod-static-nodes.service. 
Sep 13 00:53:17.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.022987 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:53:17.023168 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:53:17.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.024338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:53:17.024518 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:53:17.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.025655 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:53:17.025877 systemd[1]: Finished modprobe@drm.service. Sep 13 00:53:17.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:17.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.026950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:53:17.027188 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:53:17.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.028340 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:53:17.028578 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:53:17.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.029584 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:53:17.029767 systemd[1]: Finished modprobe@loop.service. Sep 13 00:53:17.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:17.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.031334 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:53:17.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.032855 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:53:17.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.034307 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:53:17.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.035506 systemd[1]: Reached target network-pre.target. Sep 13 00:53:17.037493 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:53:17.039578 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:53:17.040728 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:53:17.043009 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:53:17.045318 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:53:17.046469 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:53:17.047818 systemd[1]: Starting systemd-random-seed.service... 
Sep 13 00:53:17.049560 systemd-journald[1018]: Time spent on flushing to /var/log/journal/86b8d5ccf97e4fada076ab2924c2171d is 28.608ms for 1038 entries. Sep 13 00:53:17.049560 systemd-journald[1018]: System Journal (/var/log/journal/86b8d5ccf97e4fada076ab2924c2171d) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:53:17.092447 systemd-journald[1018]: Received client request to flush runtime journal. Sep 13 00:53:17.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.049107 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:53:17.051168 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:53:17.053485 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:53:17.057104 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:53:17.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:17.058280 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:53:17.094916 udevadm[1060]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:53:17.067267 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:53:17.068928 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:53:17.070238 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:53:17.072381 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:53:17.074830 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:53:17.076773 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:53:17.086395 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:53:17.093458 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:53:17.103779 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:53:17.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.468925 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:53:17.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.470977 systemd[1]: Starting systemd-udevd.service... Sep 13 00:53:17.488166 systemd-udevd[1069]: Using default interface naming scheme 'v252'. Sep 13 00:53:17.499851 systemd[1]: Started systemd-udevd.service. Sep 13 00:53:17.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:17.502312 systemd[1]: Starting systemd-networkd.service... Sep 13 00:53:17.506808 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:53:17.539513 systemd[1]: Started systemd-userdbd.service. Sep 13 00:53:17.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.544125 systemd[1]: Found device dev-ttyS0.device. Sep 13 00:53:17.559066 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:53:17.572613 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:53:17.576589 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:53:17.585497 systemd-networkd[1078]: lo: Link UP Sep 13 00:53:17.585834 systemd-networkd[1078]: lo: Gained carrier Sep 13 00:53:17.586249 systemd-networkd[1078]: Enumeration completed Sep 13 00:53:17.586342 systemd[1]: Started systemd-networkd.service. Sep 13 00:53:17.586895 systemd-networkd[1078]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:53:17.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:17.588209 systemd-networkd[1078]: eth0: Link UP Sep 13 00:53:17.588316 systemd-networkd[1078]: eth0: Gained carrier Sep 13 00:53:17.591000 audit[1080]: AVC avc: denied { confidentiality } for pid=1080 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:53:17.603762 systemd-networkd[1078]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:53:17.591000 audit[1080]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56003cd98b00 a1=338ec a2=7ff310a3bbc5 a3=5 items=110 ppid=1069 pid=1080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:17.591000 audit: CWD cwd="/" Sep 13 00:53:17.591000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=1 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=2 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=3 name=(null) inode=15439 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=4 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:53:17.591000 audit: PATH item=5 name=(null) inode=15440 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=6 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=7 name=(null) inode=15441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=8 name=(null) inode=15441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=9 name=(null) inode=15442 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=10 name=(null) inode=15441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=11 name=(null) inode=15443 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=12 name=(null) inode=15441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=13 name=(null) inode=15444 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=14 name=(null) 
inode=15441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=15 name=(null) inode=15445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=16 name=(null) inode=15441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=17 name=(null) inode=15446 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=18 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=19 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=20 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=21 name=(null) inode=15448 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=22 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=23 name=(null) inode=15449 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=24 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=25 name=(null) inode=15450 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=26 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=27 name=(null) inode=15451 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=28 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=29 name=(null) inode=15452 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=30 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=31 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=32 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=33 name=(null) inode=15454 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=34 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=35 name=(null) inode=15455 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=36 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=37 name=(null) inode=15456 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=38 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=39 name=(null) inode=15457 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=40 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=41 name=(null) inode=15458 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=42 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=43 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=44 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=45 name=(null) inode=15460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=46 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=47 name=(null) inode=15461 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=48 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=49 name=(null) inode=15462 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=50 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=51 name=(null) inode=15463 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=52 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=53 name=(null) inode=15464 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=55 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=56 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=57 name=(null) inode=15466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=58 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=59 name=(null) inode=15467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:53:17.591000 audit: PATH item=60 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=61 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=62 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=63 name=(null) inode=15469 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=64 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=65 name=(null) inode=15470 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=66 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=67 name=(null) inode=15471 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=68 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=69 
name=(null) inode=15472 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=70 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=71 name=(null) inode=15473 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=72 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=73 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=74 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=75 name=(null) inode=15475 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=76 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=77 name=(null) inode=15476 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=78 name=(null) inode=15474 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=79 name=(null) inode=15477 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=80 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=81 name=(null) inode=15478 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=82 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=83 name=(null) inode=15479 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=84 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.623642 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:53:17.591000 audit: PATH item=85 name=(null) inode=15480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=86 name=(null) inode=15480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 
13 00:53:17.591000 audit: PATH item=87 name=(null) inode=15481 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=88 name=(null) inode=15480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=89 name=(null) inode=15482 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=90 name=(null) inode=15480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=91 name=(null) inode=15483 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=92 name=(null) inode=15480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=93 name=(null) inode=15484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=94 name=(null) inode=15480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=95 name=(null) inode=15485 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=96 
name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=97 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=98 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=99 name=(null) inode=15487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=100 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=101 name=(null) inode=15488 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=102 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=103 name=(null) inode=15489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=104 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=105 name=(null) inode=15490 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=106 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=107 name=(null) inode=15491 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PATH item=109 name=(null) inode=15492 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:17.591000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:53:17.638633 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:53:17.649103 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:53:17.649121 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:53:17.649247 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:53:17.679617 kernel: kvm: Nested Virtualization enabled Sep 13 00:53:17.679825 kernel: SVM: kvm: Nested Paging enabled Sep 13 00:53:17.679860 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 13 00:53:17.679892 kernel: SVM: Virtual GIF supported Sep 13 00:53:17.699589 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:53:17.724029 systemd[1]: Finished systemd-udev-settle.service. 
Sep 13 00:53:17.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.726724 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:53:17.735136 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:53:17.764922 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:53:17.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.765985 systemd[1]: Reached target cryptsetup.target. Sep 13 00:53:17.768014 systemd[1]: Starting lvm2-activation.service... Sep 13 00:53:17.772372 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:53:17.802994 systemd[1]: Finished lvm2-activation.service. Sep 13 00:53:17.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.804109 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:53:17.805054 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:53:17.805077 systemd[1]: Reached target local-fs.target. Sep 13 00:53:17.805885 systemd[1]: Reached target machines.target. Sep 13 00:53:17.808268 systemd[1]: Starting ldconfig.service... Sep 13 00:53:17.809336 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:53:17.809390 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:53:17.810663 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:53:17.812688 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:53:17.815058 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:53:17.817159 systemd[1]: Starting systemd-sysext.service... Sep 13 00:53:17.820017 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) Sep 13 00:53:17.821015 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:53:17.825608 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:53:17.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.838960 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:53:17.842637 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:53:17.842833 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:53:17.853637 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:53:17.854153 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:53:17.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:17.864594 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:53:17.866685 systemd-fsck[1118]: fsck.fat 4.2 (2021-01-31) Sep 13 00:53:17.866685 systemd-fsck[1118]: /dev/vda1: 790 files, 120761/258078 clusters Sep 13 00:53:17.868982 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:53:17.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.879583 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:53:17.883902 (sd-sysext)[1128]: Using extensions 'kubernetes'. Sep 13 00:53:17.884246 (sd-sysext)[1128]: Merged extensions into '/usr'. Sep 13 00:53:17.900478 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:53:17.901918 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:53:17.904088 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:53:17.906105 systemd[1]: Starting modprobe@loop.service... Sep 13 00:53:17.907204 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:53:17.907348 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:53:17.908143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:53:17.908292 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:53:17.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:17.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.909742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:53:17.909917 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:53:17.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.911502 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:53:17.911850 systemd[1]: Finished modprobe@loop.service. Sep 13 00:53:17.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:17.913399 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:53:17.913488 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:53:17.954368 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:53:17.959624 systemd[1]: Finished ldconfig.service. 
Sep 13 00:53:17.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.010774 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:53:18.012516 systemd[1]: Mounting boot.mount... Sep 13 00:53:18.013188 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:18.014488 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:53:18.015229 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:18.019926 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:53:18.023245 systemd[1]: Finished systemd-sysext.service. Sep 13 00:53:18.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.024253 systemd[1]: Mounted boot.mount. Sep 13 00:53:18.026653 systemd[1]: Starting ensure-sysext.service... Sep 13 00:53:18.028480 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:53:18.034259 systemd[1]: Reloading. Sep 13 00:53:18.040008 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:53:18.041152 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:53:18.042672 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 13 00:53:18.087469 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-09-13T00:53:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:53:18.087498 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-09-13T00:53:18Z" level=info msg="torcx already run" Sep 13 00:53:18.161122 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:53:18.161139 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:53:18.179873 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:53:18.230314 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:53:18.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.232419 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:53:18.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.235443 systemd[1]: Starting audit-rules.service... Sep 13 00:53:18.237203 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:53:18.239527 systemd[1]: Starting systemd-journal-catalog-update.service... 
Sep 13 00:53:18.241903 systemd[1]: Starting systemd-resolved.service... Sep 13 00:53:18.243941 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:53:18.245756 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:53:18.248153 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:53:18.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.249000 audit[1227]: SYSTEM_BOOT pid=1227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.254763 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:18.255010 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:53:18.256763 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:53:18.258547 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:53:18.260357 systemd[1]: Starting modprobe@loop.service... Sep 13 00:53:18.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:18.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.261092 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:53:18.261189 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:53:18.261292 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:53:18.261357 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:18.262346 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:53:18.264043 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:53:18.265312 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:53:18.265448 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:53:18.266809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:53:18.266943 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:53:18.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.268199 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 13 00:53:18.268359 systemd[1]: Finished modprobe@loop.service. Sep 13 00:53:18.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.270588 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:53:18.270691 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:53:18.272054 systemd[1]: Starting systemd-update-done.service... Sep 13 00:53:18.274428 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:18.274642 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:53:18.275734 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:53:18.277961 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:53:18.279794 systemd[1]: Starting modprobe@loop.service... Sep 13 00:53:18.280957 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:53:18.281067 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:53:18.281178 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 13 00:53:18.281258 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:18.282169 systemd[1]: Finished systemd-update-done.service. Sep 13 00:53:18.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.283911 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:53:18.284031 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:53:18.285235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:53:18.285360 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:53:18.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.286868 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:53:18.290119 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 13 00:53:18.290272 systemd[1]: Finished modprobe@loop.service. Sep 13 00:53:18.294318 augenrules[1253]: No rules Sep 13 00:53:18.294376 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:18.294592 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:53:18.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:18.293000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:53:18.293000 audit[1253]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffda64b9450 a2=420 a3=0 items=0 ppid=1215 pid=1253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:18.293000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:53:18.295677 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:53:18.297724 systemd[1]: Starting modprobe@drm.service... Sep 13 00:53:18.299736 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:53:18.301982 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:53:18.302084 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:53:18.303306 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:53:18.304766 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:53:18.304858 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:18.305962 systemd[1]: Finished audit-rules.service. Sep 13 00:53:18.307355 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:53:18.307498 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:53:18.308905 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:53:18.309035 systemd[1]: Finished modprobe@drm.service. Sep 13 00:53:18.310401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:53:18.310669 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:53:18.312132 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:53:18.312260 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:53:18.313957 systemd[1]: Finished ensure-sysext.service. Sep 13 00:53:18.324936 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:53:18.326339 systemd-timesyncd[1223]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:53:18.326356 systemd[1]: Reached target time-set.target. Sep 13 00:53:18.326703 systemd-timesyncd[1223]: Initial clock synchronization to Sat 2025-09-13 00:53:18.142570 UTC. Sep 13 00:53:18.330972 systemd-resolved[1220]: Positive Trust Anchors: Sep 13 00:53:18.330985 systemd-resolved[1220]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:53:18.331019 systemd-resolved[1220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:53:18.337748 systemd-resolved[1220]: Defaulting to hostname 'linux'. Sep 13 00:53:18.339237 systemd[1]: Started systemd-resolved.service. Sep 13 00:53:18.340169 systemd[1]: Reached target network.target. Sep 13 00:53:18.340958 systemd[1]: Reached target nss-lookup.target. Sep 13 00:53:18.341803 systemd[1]: Reached target sysinit.target. Sep 13 00:53:18.342659 systemd[1]: Started motdgen.path. Sep 13 00:53:18.343375 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:53:18.344581 systemd[1]: Started logrotate.timer. Sep 13 00:53:18.345371 systemd[1]: Started mdadm.timer. Sep 13 00:53:18.346091 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:53:18.346947 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:53:18.347013 systemd[1]: Reached target paths.target. Sep 13 00:53:18.347771 systemd[1]: Reached target timers.target. Sep 13 00:53:18.348826 systemd[1]: Listening on dbus.socket. Sep 13 00:53:18.350655 systemd[1]: Starting docker.socket... Sep 13 00:53:18.352148 systemd[1]: Listening on sshd.socket. Sep 13 00:53:18.352987 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 13 00:53:18.353234 systemd[1]: Listening on docker.socket. Sep 13 00:53:18.354016 systemd[1]: Reached target sockets.target. Sep 13 00:53:18.354808 systemd[1]: Reached target basic.target. Sep 13 00:53:18.355692 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:53:18.355732 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:53:18.355749 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:53:18.356552 systemd[1]: Starting containerd.service... Sep 13 00:53:18.358094 systemd[1]: Starting dbus.service... Sep 13 00:53:18.359739 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:53:18.361835 systemd[1]: Starting extend-filesystems.service... Sep 13 00:53:18.362940 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:53:18.363870 systemd[1]: Starting motdgen.service... Sep 13 00:53:18.365391 jq[1275]: false Sep 13 00:53:18.365760 systemd[1]: Starting prepare-helm.service... Sep 13 00:53:18.367692 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:53:18.369793 systemd[1]: Starting sshd-keygen.service... Sep 13 00:53:18.372277 systemd[1]: Starting systemd-logind.service... Sep 13 00:53:18.376595 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:53:18.376661 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:53:18.377746 systemd[1]: Starting update-engine.service... Sep 13 00:53:18.379510 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:53:18.382654 jq[1297]: true Sep 13 00:53:18.382621 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 13 00:53:18.382867 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:53:18.383652 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:53:18.384210 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:53:18.409481 extend-filesystems[1276]: Found loop1 Sep 13 00:53:18.409481 extend-filesystems[1276]: Found sr0 Sep 13 00:53:18.409481 extend-filesystems[1276]: Found vda Sep 13 00:53:18.409481 extend-filesystems[1276]: Found vda1 Sep 13 00:53:18.409481 extend-filesystems[1276]: Found vda2 Sep 13 00:53:18.409481 extend-filesystems[1276]: Found vda3 Sep 13 00:53:18.409481 extend-filesystems[1276]: Found usr Sep 13 00:53:18.409481 extend-filesystems[1276]: Found vda4 Sep 13 00:53:18.409481 extend-filesystems[1276]: Found vda6 Sep 13 00:53:18.409481 extend-filesystems[1276]: Found vda7 Sep 13 00:53:18.409481 extend-filesystems[1276]: Found vda9 Sep 13 00:53:18.409481 extend-filesystems[1276]: Checking size of /dev/vda9 Sep 13 00:53:18.437112 jq[1301]: true Sep 13 00:53:18.437200 tar[1300]: linux-amd64/helm Sep 13 00:53:18.390402 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:53:18.396067 dbus-daemon[1274]: [system] SELinux support is enabled Sep 13 00:53:18.437608 env[1303]: time="2025-09-13T00:53:18.429090873Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:53:18.437780 update_engine[1295]: I0913 00:53:18.425237 1295 main.cc:92] Flatcar Update Engine starting Sep 13 00:53:18.437780 update_engine[1295]: I0913 00:53:18.427062 1295 update_check_scheduler.cc:74] Next update check in 11m8s Sep 13 00:53:18.439881 extend-filesystems[1276]: Resized partition /dev/vda9 Sep 13 00:53:18.390615 systemd[1]: Finished motdgen.service. 
Sep 13 00:53:18.442235 extend-filesystems[1335]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:53:18.444900 bash[1321]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:53:18.396241 systemd[1]: Started dbus.service. Sep 13 00:53:18.399050 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:53:18.399072 systemd[1]: Reached target system-config.target. Sep 13 00:53:18.400390 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:53:18.400407 systemd[1]: Reached target user-config.target. Sep 13 00:53:18.425387 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:53:18.427424 systemd[1]: Started update-engine.service. Sep 13 00:53:18.432060 systemd[1]: Started locksmithd.service. Sep 13 00:53:18.446587 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:53:18.455282 env[1303]: time="2025-09-13T00:53:18.455237648Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:53:18.455686 env[1303]: time="2025-09-13T00:53:18.455669037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:53:18.457799 env[1303]: time="2025-09-13T00:53:18.457775437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:53:18.457902 env[1303]: time="2025-09-13T00:53:18.457879923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:53:18.458275 env[1303]: time="2025-09-13T00:53:18.458252481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:53:18.458393 env[1303]: time="2025-09-13T00:53:18.458371124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:53:18.458512 env[1303]: time="2025-09-13T00:53:18.458487642Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:53:18.458646 env[1303]: time="2025-09-13T00:53:18.458623998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:53:18.458844 env[1303]: time="2025-09-13T00:53:18.458826177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:53:18.459166 env[1303]: time="2025-09-13T00:53:18.459147600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:53:18.459434 env[1303]: time="2025-09-13T00:53:18.459413438Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:53:18.459548 env[1303]: time="2025-09-13T00:53:18.459525588Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 13 00:53:18.459824 env[1303]: time="2025-09-13T00:53:18.459794773Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:53:18.459922 env[1303]: time="2025-09-13T00:53:18.459900461Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:53:18.469876 systemd-logind[1289]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:53:18.469898 systemd-logind[1289]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:53:18.471121 systemd-logind[1289]: New seat seat0. Sep 13 00:53:18.475585 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:53:18.478614 systemd[1]: Started systemd-logind.service. Sep 13 00:53:18.496636 extend-filesystems[1335]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:53:18.496636 extend-filesystems[1335]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:53:18.496636 extend-filesystems[1335]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:53:18.515021 extend-filesystems[1276]: Resized filesystem in /dev/vda9 Sep 13 00:53:18.498595 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.498414796Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.498802202Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.498818202Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.498861904Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.498877052Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.498888834Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.498900586Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.498912879Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.498924832Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.507368183Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.507389152Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.507411655Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.507528504Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:53:18.516819 env[1303]: time="2025-09-13T00:53:18.507615196Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:53:18.498835 systemd[1]: Finished extend-filesystems.service. 
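The extend-filesystems run logged above grows /dev/vda9 from 553472 to 1864699 blocks, which resize2fs reports as 4 KiB ("4k") blocks. A quick sanity check of those figures in Python:

```python
# Sanity-check the ext4 online-resize figures logged by resize2fs above:
# the filesystem grows from 553472 to 1864699 blocks of 4 KiB each.
BLOCK_SIZE = 4096

def fs_bytes(blocks: int, block_size: int = BLOCK_SIZE) -> int:
    """Total filesystem size in bytes for a given block count."""
    return blocks * block_size

old = fs_bytes(553472)   # size before the resize
new = fs_bytes(1864699)  # size after the resize

print(f"before: {old / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {new / 2**30:.2f} GiB")  # ~7.11 GiB
```

So the on-line resize roughly triples the root filesystem, consistent with the "resizing filesystem from 553472 to 1864699 blocks" kernel message.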
Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.507873901Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.507895662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.507907705Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.507951246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.507962267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.507973248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.507983337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.507994397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.508005448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.508016118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.508028682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.508040103Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.508136303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.508149558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:53:18.517160 env[1303]: time="2025-09-13T00:53:18.508159978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:53:18.509266 systemd[1]: Started containerd.service. Sep 13 00:53:18.517499 env[1303]: time="2025-09-13T00:53:18.508169806Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:53:18.517499 env[1303]: time="2025-09-13T00:53:18.508181618Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:53:18.517499 env[1303]: time="2025-09-13T00:53:18.508192188Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:53:18.517499 env[1303]: time="2025-09-13T00:53:18.508217325Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:53:18.517499 env[1303]: time="2025-09-13T00:53:18.508248995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.508412301Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.508454430Z" level=info msg="Connect containerd service" Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.508484667Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.508920995Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.509112674Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.509142520Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.509180331Z" level=info msg="containerd successfully booted in 0.081199s" Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.509551487Z" level=info msg="Start subscribing containerd event" Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.509671873Z" level=info msg="Start recovering state" Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.509744909Z" level=info msg="Start event monitor" Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.509760068Z" level=info msg="Start snapshots syncer" Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.509771279Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:53:18.517613 env[1303]: time="2025-09-13T00:53:18.509781729Z" level=info msg="Start streaming server" Sep 13 00:53:18.527962 locksmithd[1337]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:53:18.807411 tar[1300]: linux-amd64/LICENSE Sep 13 00:53:18.807411 tar[1300]: linux-amd64/README.md Sep 13 00:53:18.811798 systemd[1]: Finished prepare-helm.service. Sep 13 00:53:18.845758 sshd_keygen[1296]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:53:18.864441 systemd[1]: Finished sshd-keygen.service. Sep 13 00:53:18.867081 systemd[1]: Starting issuegen.service... Sep 13 00:53:18.872551 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:53:18.872775 systemd[1]: Finished issuegen.service. Sep 13 00:53:18.874945 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:53:18.880844 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:53:18.883648 systemd[1]: Started getty@tty1.service. Sep 13 00:53:18.885959 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:53:18.887116 systemd[1]: Reached target getty.target. 
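The flat "Start cri plugin with config {...}" dump logged earlier corresponds to containerd 1.6's TOML configuration. As a hedged sketch, an /etc/containerd/config.toml fragment matching the non-default values visible in that dump (snapshotter, runc runtime, SystemdCgroup=false, sandbox image, CNI paths) would look roughly like this; everything else is left at its default:

```toml
# Sketch of a containerd 1.6 config.toml matching the CRI options logged above.
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false

  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
```

Note that SystemdCgroup=false lines up with the "System is tainted: cgroupsv1" notice earlier in the boot: this image is still running the legacy cgroup v1 hierarchy.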
Sep 13 00:53:19.533710 systemd-networkd[1078]: eth0: Gained IPv6LL Sep 13 00:53:19.535814 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:53:19.537358 systemd[1]: Reached target network-online.target. Sep 13 00:53:19.540015 systemd[1]: Starting kubelet.service... Sep 13 00:53:20.868599 systemd[1]: Started kubelet.service. Sep 13 00:53:20.870026 systemd[1]: Reached target multi-user.target. Sep 13 00:53:20.872330 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:53:20.878737 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:53:20.878961 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:53:20.882791 systemd[1]: Startup finished in 5.291s (kernel) + 6.500s (userspace) = 11.792s. Sep 13 00:53:21.518456 kubelet[1377]: E0913 00:53:21.518360 1377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:53:21.520583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:53:21.520783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:53:22.537769 systemd[1]: Created slice system-sshd.slice. Sep 13 00:53:22.538933 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:57036.service. Sep 13 00:53:22.580654 sshd[1387]: Accepted publickey for core from 10.0.0.1 port 57036 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:53:22.582061 sshd[1387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:22.590008 systemd-logind[1289]: New session 1 of user core. Sep 13 00:53:22.590798 systemd[1]: Created slice user-500.slice. Sep 13 00:53:22.591694 systemd[1]: Starting user-runtime-dir@500.service... 
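The kubelet exit above is the expected state on a node that has not yet been provisioned: kubelet.service starts before /var/lib/kubelet/config.yaml exists (that file is normally written later by kubeadm or similar tooling), so kubelet exits with status 1 and systemd marks the unit failed. A minimal preflight check for the same condition, as a hedged sketch (the path is taken from the error message above; the helper name is ours):

```python
from pathlib import Path

# Path kubelet tried to read, per the "failed to load kubelet config file"
# error above; it is created later by kubeadm or provisioning tooling.
DEFAULT_CONFIG = "/var/lib/kubelet/config.yaml"

def kubelet_config_ready(path: str = DEFAULT_CONFIG) -> bool:
    """Return True if the kubelet config file exists and is non-empty."""
    p = Path(path)
    return p.is_file() and p.stat().st_size > 0

if not kubelet_config_ready():
    print(f"kubelet config missing: {DEFAULT_CONFIG} (node not yet provisioned)")
```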
Sep 13 00:53:22.599731 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:53:22.600813 systemd[1]: Starting user@500.service... Sep 13 00:53:22.603847 (systemd)[1392]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:22.670496 systemd[1392]: Queued start job for default target default.target. Sep 13 00:53:22.670706 systemd[1392]: Reached target paths.target. Sep 13 00:53:22.670721 systemd[1392]: Reached target sockets.target. Sep 13 00:53:22.670741 systemd[1392]: Reached target timers.target. Sep 13 00:53:22.670752 systemd[1392]: Reached target basic.target. Sep 13 00:53:22.670788 systemd[1392]: Reached target default.target. Sep 13 00:53:22.670809 systemd[1392]: Startup finished in 61ms. Sep 13 00:53:22.670894 systemd[1]: Started user@500.service. Sep 13 00:53:22.671876 systemd[1]: Started session-1.scope. Sep 13 00:53:22.721194 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:57048.service. Sep 13 00:53:22.759811 sshd[1401]: Accepted publickey for core from 10.0.0.1 port 57048 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:53:22.760931 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:22.764819 systemd-logind[1289]: New session 2 of user core. Sep 13 00:53:22.765944 systemd[1]: Started session-2.scope. Sep 13 00:53:22.818683 sshd[1401]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:22.821012 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:57052.service. Sep 13 00:53:22.821493 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:57048.service: Deactivated successfully. Sep 13 00:53:22.822403 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:53:22.822505 systemd-logind[1289]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:53:22.823452 systemd-logind[1289]: Removed session 2. 
Sep 13 00:53:22.860480 sshd[1406]: Accepted publickey for core from 10.0.0.1 port 57052 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:53:22.861517 sshd[1406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:22.864673 systemd-logind[1289]: New session 3 of user core. Sep 13 00:53:22.865317 systemd[1]: Started session-3.scope. Sep 13 00:53:22.914544 sshd[1406]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:22.916640 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:57054.service. Sep 13 00:53:22.917195 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:57052.service: Deactivated successfully. Sep 13 00:53:22.917925 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:53:22.918512 systemd-logind[1289]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:53:22.919296 systemd-logind[1289]: Removed session 3. Sep 13 00:53:22.956513 sshd[1413]: Accepted publickey for core from 10.0.0.1 port 57054 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:53:22.957484 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:22.960708 systemd-logind[1289]: New session 4 of user core. Sep 13 00:53:22.961330 systemd[1]: Started session-4.scope. Sep 13 00:53:23.013758 sshd[1413]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:23.015663 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:57066.service. Sep 13 00:53:23.016374 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:57054.service: Deactivated successfully. Sep 13 00:53:23.017643 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:53:23.017651 systemd-logind[1289]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:53:23.018672 systemd-logind[1289]: Removed session 4. 
Sep 13 00:53:23.054877 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 57066 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:53:23.055862 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:23.058725 systemd-logind[1289]: New session 5 of user core. Sep 13 00:53:23.059394 systemd[1]: Started session-5.scope. Sep 13 00:53:23.112392 sudo[1426]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:53:23.112613 sudo[1426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:23.121780 dbus-daemon[1274]: avc: received setenforce notice (enforcing=325928352) Sep 13 00:53:23.123867 sudo[1426]: pam_unix(sudo:session): session closed for user root Sep 13 00:53:23.125155 sshd[1420]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:23.127450 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:57074.service. Sep 13 00:53:23.128168 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:57066.service: Deactivated successfully. Sep 13 00:53:23.128967 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:53:23.128990 systemd-logind[1289]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:53:23.129814 systemd-logind[1289]: Removed session 5. Sep 13 00:53:23.165751 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 57074 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:53:23.166617 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:23.169541 systemd-logind[1289]: New session 6 of user core. Sep 13 00:53:23.170417 systemd[1]: Started session-6.scope. 
Sep 13 00:53:23.222385 sudo[1435]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:53:23.222621 sudo[1435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:23.225055 sudo[1435]: pam_unix(sudo:session): session closed for user root Sep 13 00:53:23.228756 sudo[1434]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:53:23.228939 sudo[1434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:23.236475 systemd[1]: Stopping audit-rules.service... Sep 13 00:53:23.236000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 00:53:23.237602 auditctl[1438]: No rules Sep 13 00:53:23.237833 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:53:23.237996 systemd[1]: Stopped audit-rules.service. Sep 13 00:53:23.238515 kernel: kauditd_printk_skb: 191 callbacks suppressed Sep 13 00:53:23.238586 kernel: audit: type=1305 audit(1757724803.236:152): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 00:53:23.239179 systemd[1]: Starting audit-rules.service... 
Sep 13 00:53:23.236000 audit[1438]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff28960970 a2=420 a3=0 items=0 ppid=1 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:23.247573 kernel: audit: type=1300 audit(1757724803.236:152): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff28960970 a2=420 a3=0 items=0 ppid=1 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:23.247620 kernel: audit: type=1327 audit(1757724803.236:152): proctitle=2F7362696E2F617564697463746C002D44 Sep 13 00:53:23.247651 kernel: audit: type=1131 audit(1757724803.236:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.236000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 13 00:53:23.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.255602 augenrules[1456]: No rules Sep 13 00:53:23.256234 systemd[1]: Finished audit-rules.service. Sep 13 00:53:23.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:23.257075 sudo[1434]: pam_unix(sudo:session): session closed for user root Sep 13 00:53:23.258185 sshd[1428]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:23.254000 audit[1434]: USER_END pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.260068 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:57074.service: Deactivated successfully. Sep 13 00:53:23.261863 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:57086.service. Sep 13 00:53:23.262161 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:53:23.264598 kernel: audit: type=1130 audit(1757724803.254:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.264641 kernel: audit: type=1106 audit(1757724803.254:155): pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.264669 kernel: audit: type=1104 audit(1757724803.254:156): pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.254000 audit[1434]: CRED_DISP pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.263073 systemd-logind[1289]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:53:23.264011 systemd-logind[1289]: Removed session 6. 
Sep 13 00:53:23.254000 audit[1428]: USER_END pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:53:23.272544 kernel: audit: type=1106 audit(1757724803.254:157): pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:53:23.272653 kernel: audit: type=1104 audit(1757724803.254:158): pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:53:23.254000 audit[1428]: CRED_DISP pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:53:23.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.131:22-10.0.0.1:57074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.279651 kernel: audit: type=1131 audit(1757724803.259:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.131:22-10.0.0.1:57074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:23.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.131:22-10.0.0.1:57086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.302000 audit[1463]: USER_ACCT pid=1463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:53:23.303445 sshd[1463]: Accepted publickey for core from 10.0.0.1 port 57086 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:53:23.303000 audit[1463]: CRED_ACQ pid=1463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:53:23.303000 audit[1463]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffccbdb7540 a2=3 a3=0 items=0 ppid=1 pid=1463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:23.303000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:53:23.304680 sshd[1463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:23.307935 systemd-logind[1289]: New session 7 of user core. Sep 13 00:53:23.308753 systemd[1]: Started session-7.scope. 
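The audit PROCTITLE fields in the records above carry the process's command line hex-encoded, with NUL bytes separating argv entries. Decoding the two payloads seen so far recovers `/sbin/auditctl -D` and `sshd: core [priv]`:

```python
def decode_proctitle(hexstr: str) -> list[str]:
    """Decode an audit PROCTITLE hex payload into its NUL-separated argv."""
    return bytes.fromhex(hexstr).decode("utf-8", errors="replace").split("\x00")

# proctitle values copied from the audit records above
print(decode_proctitle("2F7362696E2F617564697463746C002D44"))  # ['/sbin/auditctl', '-D']
print(decode_proctitle("737368643A20636F7265205B707269765D"))  # ['sshd: core [priv]']
```

The same helper applies to the later iptables PROCTITLE records emitted while docker sets up its netfilter chains.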
Sep 13 00:53:23.311000 audit[1463]: USER_START pid=1463 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:53:23.313000 audit[1466]: CRED_ACQ pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:53:23.358000 audit[1467]: USER_ACCT pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.359842 sudo[1467]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:53:23.358000 audit[1467]: CRED_REFR pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.360114 sudo[1467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:23.361000 audit[1467]: USER_START pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.385215 systemd[1]: Starting docker.service... 
Sep 13 00:53:23.431190 env[1478]: time="2025-09-13T00:53:23.431135141Z" level=info msg="Starting up" Sep 13 00:53:23.432665 env[1478]: time="2025-09-13T00:53:23.432628415Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:53:23.432665 env[1478]: time="2025-09-13T00:53:23.432659391Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:53:23.432742 env[1478]: time="2025-09-13T00:53:23.432680157Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:53:23.432742 env[1478]: time="2025-09-13T00:53:23.432689497Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:53:23.433969 env[1478]: time="2025-09-13T00:53:23.433937155Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:53:23.433969 env[1478]: time="2025-09-13T00:53:23.433957842Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:53:23.434034 env[1478]: time="2025-09-13T00:53:23.433972352Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:53:23.434034 env[1478]: time="2025-09-13T00:53:23.433981978Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:53:23.966876 env[1478]: time="2025-09-13T00:53:23.966826361Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 13 00:53:23.966876 env[1478]: time="2025-09-13T00:53:23.966851071Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 13 00:53:23.967110 env[1478]: time="2025-09-13T00:53:23.967040575Z" level=info msg="Loading containers: start." 
Sep 13 00:53:24.024000 audit[1512]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.024000 audit[1512]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd7052f270 a2=0 a3=7ffd7052f25c items=0 ppid=1478 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.024000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 13 00:53:24.025000 audit[1514]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.025000 audit[1514]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcad9ef180 a2=0 a3=7ffcad9ef16c items=0 ppid=1478 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.025000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 13 00:53:24.027000 audit[1516]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.027000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff806ee160 a2=0 a3=7fff806ee14c items=0 ppid=1478 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.027000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:53:24.028000 audit[1518]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.028000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcfa567db0 a2=0 a3=7ffcfa567d9c items=0 ppid=1478 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.028000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:53:24.030000 audit[1520]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.030000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd2dad65f0 a2=0 a3=7ffd2dad65dc items=0 ppid=1478 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.030000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 13 00:53:24.048000 audit[1525]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.048000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffde1361b20 a2=0 a3=7ffde1361b0c items=0 ppid=1478 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.048000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 13 00:53:24.152000 audit[1527]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.152000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffe79f3e90 a2=0 a3=7fffe79f3e7c items=0 ppid=1478 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.152000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 13 00:53:24.154000 audit[1529]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.154000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffea5ab01e0 a2=0 a3=7ffea5ab01cc items=0 ppid=1478 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.154000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 13 00:53:24.156000 audit[1531]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.156000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fff90a407a0 a2=0 a3=7fff90a4078c items=0 ppid=1478 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.156000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:53:24.259000 audit[1535]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.259000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffe1454140 a2=0 a3=7fffe145412c items=0 ppid=1478 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.259000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:53:24.264000 audit[1536]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.264000 audit[1536]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe26b3b950 a2=0 a3=7ffe26b3b93c items=0 ppid=1478 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.264000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:53:24.274586 kernel: Initializing XFRM netlink socket Sep 13 00:53:24.303123 env[1478]: time="2025-09-13T00:53:24.303077069Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Sep 13 00:53:24.319000 audit[1544]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.319000 audit[1544]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff673958d0 a2=0 a3=7fff673958bc items=0 ppid=1478 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.319000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 13 00:53:24.334000 audit[1547]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.334000 audit[1547]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe0edf9f80 a2=0 a3=7ffe0edf9f6c items=0 ppid=1478 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.334000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 13 00:53:24.336000 audit[1550]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.336000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe057ea7c0 a2=0 a3=7ffe057ea7ac items=0 ppid=1478 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.336000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 13 00:53:24.338000 audit[1552]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.338000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdf96ffa30 a2=0 a3=7ffdf96ffa1c items=0 ppid=1478 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.338000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 13 00:53:24.340000 audit[1554]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.340000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe34c7bc40 a2=0 a3=7ffe34c7bc2c items=0 ppid=1478 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.340000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 13 00:53:24.342000 audit[1556]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.342000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffcffce27c0 a2=0 a3=7ffcffce27ac items=0 ppid=1478 
pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.342000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 13 00:53:24.344000 audit[1558]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.344000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff39cd6250 a2=0 a3=7fff39cd623c items=0 ppid=1478 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.344000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 13 00:53:24.350000 audit[1561]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.350000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd894a30e0 a2=0 a3=7ffd894a30cc items=0 ppid=1478 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.350000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 13 00:53:24.352000 audit[1563]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.352000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffeae3c9130 a2=0 a3=7ffeae3c911c items=0 ppid=1478 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.352000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:53:24.354000 audit[1565]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.354000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe2dd968c0 a2=0 a3=7ffe2dd968ac items=0 ppid=1478 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.354000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:53:24.357000 audit[1567]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.357000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd8cc0de10 a2=0 a3=7ffd8cc0ddfc items=0 ppid=1478 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.357000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 13 00:53:24.357987 systemd-networkd[1078]: docker0: Link UP Sep 13 00:53:24.509000 audit[1571]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.509000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffebe414680 a2=0 a3=7ffebe41466c items=0 ppid=1478 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.509000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:53:24.518000 audit[1572]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:24.518000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff959cb510 a2=0 a3=7fff959cb4fc items=0 ppid=1478 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:24.518000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:53:24.520536 env[1478]: time="2025-09-13T00:53:24.520494985Z" level=info msg="Loading containers: done." 
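Every NETFILTER_CFG record above carries the iptables invocation as a hex-encoded PROCTITLE, with NUL bytes separating the argv elements. A short decoder (an illustrative sketch, not part of the logging tooling) recovers the command line; the sample value below is the final rule re-inserting DOCKER-USER at the top of FORWARD:

```python
def decode_proctitle(hex_value: str) -> str:
    """Audit PROCTITLE values are hex bytes; NUL separates argv entries."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

rule = ("2F7573722F7362696E2F69707461626C6573002D2D7761697400"
        "2D4900464F5257415244002D6A00444F434B45522D55534552")
print(decode_proctitle(rule))  # /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER
```

The same function decodes the earlier session record's proctitle `737368643A20636F7265205B707269765D` to `sshd: core [priv]`.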
Sep 13 00:53:24.549945 env[1478]: time="2025-09-13T00:53:24.549889087Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:53:24.550140 env[1478]: time="2025-09-13T00:53:24.550103577Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 00:53:24.550240 env[1478]: time="2025-09-13T00:53:24.550215624Z" level=info msg="Daemon has completed initialization" Sep 13 00:53:24.576495 systemd[1]: Started docker.service. Sep 13 00:53:24.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.582664 env[1478]: time="2025-09-13T00:53:24.582600071Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:53:25.597810 env[1303]: time="2025-09-13T00:53:25.597460327Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:53:26.234293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3994209035.mount: Deactivated successfully. 
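Each PullImage pair in the lines that follow first names a tag (e.g. `registry.k8s.io/kube-apiserver:v1.31.13`) and then returns a digest-pinned image reference. Splitting such a tagged reference needs the last colon, since a registry host may itself carry a port; a minimal sketch (not the parser containerd actually uses, and ignoring `@sha256:` digest references):

```python
def split_reference(ref: str) -> tuple[str, str]:
    """Split a tagged image reference into (name, tag) on the last ':'."""
    name, _, tag = ref.rpartition(":")
    return name, tag

print(split_reference("registry.k8s.io/kube-apiserver:v1.31.13"))
```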
Sep 13 00:53:28.101857 env[1303]: time="2025-09-13T00:53:28.101783941Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:28.103769 env[1303]: time="2025-09-13T00:53:28.103704584Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:28.105611 env[1303]: time="2025-09-13T00:53:28.105551711Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:28.107395 env[1303]: time="2025-09-13T00:53:28.107365505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:28.108106 env[1303]: time="2025-09-13T00:53:28.108059123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:53:28.108694 env[1303]: time="2025-09-13T00:53:28.108665483Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:53:30.113826 env[1303]: time="2025-09-13T00:53:30.113756863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:30.115585 env[1303]: time="2025-09-13T00:53:30.115530451Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:53:30.117348 env[1303]: time="2025-09-13T00:53:30.117300083Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:30.119182 env[1303]: time="2025-09-13T00:53:30.119148256Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:30.120076 env[1303]: time="2025-09-13T00:53:30.120033949Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:53:30.120630 env[1303]: time="2025-09-13T00:53:30.120599002Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:53:31.642665 env[1303]: time="2025-09-13T00:53:31.642603232Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:31.644522 env[1303]: time="2025-09-13T00:53:31.644462127Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:31.646297 env[1303]: time="2025-09-13T00:53:31.646259470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:31.648149 env[1303]: time="2025-09-13T00:53:31.648123311Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:31.649156 env[1303]: time="2025-09-13T00:53:31.649107801Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:53:31.649687 env[1303]: time="2025-09-13T00:53:31.649648526Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:53:31.653952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:53:31.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.654147 systemd[1]: Stopped kubelet.service. Sep 13 00:53:31.655095 kernel: kauditd_printk_skb: 84 callbacks suppressed Sep 13 00:53:31.655152 kernel: audit: type=1130 audit(1757724811.653:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.655641 systemd[1]: Starting kubelet.service... Sep 13 00:53:31.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.661109 kernel: audit: type=1131 audit(1757724811.653:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:31.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.874895 systemd[1]: Started kubelet.service. Sep 13 00:53:31.878590 kernel: audit: type=1130 audit(1757724811.873:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:32.558298 kubelet[1617]: E0913 00:53:32.558206 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:53:32.560997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:53:32.561143 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:53:32.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:53:32.564581 kernel: audit: type=1131 audit(1757724812.560:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:53:33.704440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836999790.mount: Deactivated successfully. 
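The kauditd lines above carry their own clock as `audit(EPOCH.msec:SERIAL)`; converting the epoch confirms it matches the journal timestamps (1757724811.653 is 00:53:31 UTC on Sep 13 2025). A small converter, as a sketch:

```python
from datetime import datetime, timezone

def audit_time(stamp: str) -> str:
    """Convert an 'EPOCH.msec:SERIAL' audit(...) field to an ISO timestamp."""
    epoch, _, serial = stamp.partition(":")
    t = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return f"{t.isoformat()} serial={serial}"

print(audit_time("1757724811.653:194"))
# 2025-09-13T00:53:31.653000+00:00 serial=194
```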
Sep 13 00:53:34.308249 env[1303]: time="2025-09-13T00:53:34.308198024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.311448 env[1303]: time="2025-09-13T00:53:34.311409632Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.313080 env[1303]: time="2025-09-13T00:53:34.313040440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.314510 env[1303]: time="2025-09-13T00:53:34.314484619Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.314920 env[1303]: time="2025-09-13T00:53:34.314879533Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:53:34.315424 env[1303]: time="2025-09-13T00:53:34.315391036Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:53:34.778508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170548658.mount: Deactivated successfully. 
Sep 13 00:53:36.891543 env[1303]: time="2025-09-13T00:53:36.891463613Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:36.894306 env[1303]: time="2025-09-13T00:53:36.894264357Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:36.896429 env[1303]: time="2025-09-13T00:53:36.896387218Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:36.898488 env[1303]: time="2025-09-13T00:53:36.898443204Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:36.899203 env[1303]: time="2025-09-13T00:53:36.899150241Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:53:36.900102 env[1303]: time="2025-09-13T00:53:36.900071712Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:53:37.455387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount227992022.mount: Deactivated successfully. 
Sep 13 00:53:37.461225 env[1303]: time="2025-09-13T00:53:37.461155356Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:37.462895 env[1303]: time="2025-09-13T00:53:37.462842777Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:37.464325 env[1303]: time="2025-09-13T00:53:37.464265630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:37.465623 env[1303]: time="2025-09-13T00:53:37.465591113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:37.466023 env[1303]: time="2025-09-13T00:53:37.465982685Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:53:37.466584 env[1303]: time="2025-09-13T00:53:37.466534084Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:53:37.980335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3879338684.mount: Deactivated successfully. 
Sep 13 00:53:42.070298 env[1303]: time="2025-09-13T00:53:42.070233765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:42.175338 env[1303]: time="2025-09-13T00:53:42.175275004Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:42.204388 env[1303]: time="2025-09-13T00:53:42.204339971Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:42.206997 env[1303]: time="2025-09-13T00:53:42.206966261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:42.207899 env[1303]: time="2025-09-13T00:53:42.207861287Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:53:42.648361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:53:42.648642 systemd[1]: Stopped kubelet.service. Sep 13 00:53:42.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:42.650586 systemd[1]: Starting kubelet.service... Sep 13 00:53:42.655888 kernel: audit: type=1130 audit(1757724822.647:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:42.656007 kernel: audit: type=1131 audit(1757724822.647:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:42.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:42.753450 systemd[1]: Started kubelet.service. Sep 13 00:53:42.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:42.763589 kernel: audit: type=1130 audit(1757724822.752:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:42.832925 kubelet[1651]: E0913 00:53:42.832888 1651 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:53:42.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:53:42.837469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:53:42.837789 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:53:42.841722 kernel: audit: type=1131 audit(1757724822.836:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:53:44.625204 systemd[1]: Stopped kubelet.service. Sep 13 00:53:44.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:44.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:44.628302 systemd[1]: Starting kubelet.service... Sep 13 00:53:44.631378 kernel: audit: type=1130 audit(1757724824.623:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:44.631444 kernel: audit: type=1131 audit(1757724824.623:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:44.655869 systemd[1]: Reloading. 
Sep 13 00:53:44.725143 /usr/lib/systemd/system-generators/torcx-generator[1692]: time="2025-09-13T00:53:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:53:44.725169 /usr/lib/systemd/system-generators/torcx-generator[1692]: time="2025-09-13T00:53:44Z" level=info msg="torcx already run" Sep 13 00:53:45.397807 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:53:45.397824 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:53:45.416616 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:53:45.482957 systemd[1]: Started kubelet.service. Sep 13 00:53:45.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:45.486717 kernel: audit: type=1130 audit(1757724825.482:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:45.489255 systemd[1]: Stopping kubelet.service... Sep 13 00:53:45.490030 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:53:45.490277 systemd[1]: Stopped kubelet.service. 
Sep 13 00:53:45.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:45.492219 systemd[1]: Starting kubelet.service... Sep 13 00:53:45.494593 kernel: audit: type=1131 audit(1757724825.489:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:45.580644 systemd[1]: Started kubelet.service. Sep 13 00:53:45.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:45.587587 kernel: audit: type=1130 audit(1757724825.582:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:45.619773 kubelet[1757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:53:45.619773 kubelet[1757]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:53:45.619773 kubelet[1757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:53:45.620124 kubelet[1757]: I0913 00:53:45.619881 1757 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:53:45.881307 kubelet[1757]: I0913 00:53:45.881255 1757 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:53:45.881307 kubelet[1757]: I0913 00:53:45.881289 1757 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:53:45.881593 kubelet[1757]: I0913 00:53:45.881555 1757 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:53:45.902770 kubelet[1757]: I0913 00:53:45.902725 1757 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:53:45.903486 kubelet[1757]: E0913 00:53:45.903450 1757 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:45.909370 kubelet[1757]: E0913 00:53:45.909343 1757 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:53:45.909370 kubelet[1757]: I0913 00:53:45.909369 1757 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:53:45.914425 kubelet[1757]: I0913 00:53:45.914402 1757 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:53:45.914694 kubelet[1757]: I0913 00:53:45.914673 1757 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:53:45.914810 kubelet[1757]: I0913 00:53:45.914783 1757 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:53:45.914996 kubelet[1757]: I0913 00:53:45.914809 1757 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Sep 13 00:53:45.915090 kubelet[1757]: I0913 00:53:45.915008 1757 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:53:45.915090 kubelet[1757]: I0913 00:53:45.915017 1757 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:53:45.915138 kubelet[1757]: I0913 00:53:45.915130 1757 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:53:45.920196 kubelet[1757]: I0913 00:53:45.920172 1757 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:53:45.920196 kubelet[1757]: I0913 00:53:45.920194 1757 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:53:45.920270 kubelet[1757]: I0913 00:53:45.920234 1757 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:53:45.920270 kubelet[1757]: I0913 00:53:45.920258 1757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:53:45.934269 kubelet[1757]: W0913 00:53:45.934180 1757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 13 00:53:45.934406 kubelet[1757]: E0913 00:53:45.934270 1757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:45.934406 kubelet[1757]: W0913 00:53:45.934181 1757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 13 00:53:45.934406 
kubelet[1757]: E0913 00:53:45.934322 1757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:45.935588 kubelet[1757]: I0913 00:53:45.935554 1757 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:53:45.935971 kubelet[1757]: I0913 00:53:45.935939 1757 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:53:45.936023 kubelet[1757]: W0913 00:53:45.936013 1757 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:53:45.937656 kubelet[1757]: I0913 00:53:45.937619 1757 server.go:1274] "Started kubelet" Sep 13 00:53:45.937891 kubelet[1757]: I0913 00:53:45.937818 1757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:53:45.938408 kubelet[1757]: I0913 00:53:45.938258 1757 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:53:45.937000 audit[1757]: AVC avc: denied { mac_admin } for pid=1757 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:45.937000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:45.937000 audit[1757]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ca4a50 a1=c000376f78 a2=c000ca4a20 a3=25 items=0 ppid=1 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.937000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:45.937000 audit[1757]: AVC avc: denied { mac_admin } for pid=1757 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:45.937000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:45.937000 audit[1757]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000413c60 a1=c000376f90 a2=c000ca4ae0 a3=25 items=0 ppid=1 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:45.940000 audit[1770]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1770 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.942581 kernel: audit: type=1400 audit(1757724825.937:207): avc: denied { mac_admin } for pid=1757 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:45.940000 audit[1770]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd470c3cf0 a2=0 a3=7ffd470c3cdc items=0 ppid=1757 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:53:45.940000 audit[1771]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.940000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3478e4d0 a2=0 a3=7ffe3478e4bc items=0 ppid=1757 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:53:45.942791 kubelet[1757]: I0913 00:53:45.938763 1757 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:53:45.942791 kubelet[1757]: I0913 00:53:45.938793 1757 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:53:45.942791 kubelet[1757]: I0913 00:53:45.938857 1757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:53:45.942791 kubelet[1757]: I0913 00:53:45.939814 1757 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:53:45.942791 kubelet[1757]: I0913 00:53:45.940733 1757 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:53:45.942791 kubelet[1757]: I0913 00:53:45.941002 1757 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:53:45.944591 kubelet[1757]: E0913 00:53:45.944545 1757 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:45.945312 kubelet[1757]: I0913 00:53:45.944799 1757 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:53:45.945312 kubelet[1757]: I0913 00:53:45.945198 1757 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:53:45.945312 kubelet[1757]: E0913 00:53:45.945186 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="200ms" Sep 13 00:53:45.945414 kubelet[1757]: I0913 00:53:45.945361 1757 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:53:45.946148 kubelet[1757]: W0913 00:53:45.945897 1757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 13 00:53:45.946148 kubelet[1757]: E0913 00:53:45.945958 1757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:45.946266 kubelet[1757]: I0913 00:53:45.946242 1757 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:53:45.946378 kubelet[1757]: I0913 00:53:45.946352 1757 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial 
unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:53:45.946000 audit[1773]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.946000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd1c1c3510 a2=0 a3=7ffd1c1c34fc items=0 ppid=1757 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.946000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:53:45.948689 kubelet[1757]: I0913 00:53:45.948416 1757 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:53:45.948000 audit[1775]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.948000 audit[1775]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdaf4d2c40 a2=0 a3=7ffdaf4d2c2c items=0 ppid=1757 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.948000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:53:45.950798 kubelet[1757]: E0913 00:53:45.949653 1757 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b16a96e52c70 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:53:45.937587312 +0000 UTC m=+0.352966214,LastTimestamp:2025-09-13 00:53:45.937587312 +0000 UTC m=+0.352966214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:53:45.951063 kubelet[1757]: E0913 00:53:45.951048 1757 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:53:45.954000 audit[1780]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1780 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.954000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd6033c100 a2=0 a3=7ffd6033c0ec items=0 ppid=1757 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.954000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 13 00:53:45.956771 kubelet[1757]: I0913 00:53:45.956704 1757 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 13 00:53:45.955000 audit[1782]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:45.955000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffea911a410 a2=0 a3=7ffea911a3fc items=0 ppid=1757 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.955000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:53:45.957806 kubelet[1757]: I0913 00:53:45.957792 1757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:53:45.957841 kubelet[1757]: I0913 00:53:45.957828 1757 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:53:45.957864 kubelet[1757]: I0913 00:53:45.957853 1757 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:53:45.957908 kubelet[1757]: E0913 00:53:45.957889 1757 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:53:45.956000 audit[1783]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.956000 audit[1783]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda0f66960 a2=0 a3=7ffda0f6694c items=0 ppid=1757 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.956000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:53:45.957000 audit[1784]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:45.957000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8f7d7b10 a2=0 a3=7ffc8f7d7afc items=0 ppid=1757 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:53:45.959001 kubelet[1757]: W0913 00:53:45.958943 1757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 13 00:53:45.959061 kubelet[1757]: E0913 00:53:45.959017 1757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:45.958000 audit[1786]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:45.958000 audit[1786]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffee0f49830 a2=0 a3=7ffee0f4981c items=0 ppid=1757 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:53:45.959000 audit[1787]: NETFILTER_CFG table=filter:35 family=10 entries=2 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:45.959000 audit[1787]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffddd919cc0 a2=0 a3=7ffddd919cac items=0 ppid=1757 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.959000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:53:45.960000 audit[1785]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=1785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.960000 audit[1785]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc923d8da0 a2=0 a3=7ffc923d8d8c items=0 ppid=1757 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.960000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:53:45.961000 audit[1788]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.961000 audit[1788]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb8e1bdf0 a2=0 a3=7ffcb8e1bddc items=0 ppid=1757 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.961000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:53:45.967770 kubelet[1757]: I0913 00:53:45.967752 1757 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:53:45.967770 kubelet[1757]: I0913 00:53:45.967767 1757 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:53:45.967846 kubelet[1757]: I0913 00:53:45.967782 1757 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:53:46.044939 kubelet[1757]: E0913 00:53:46.044917 1757 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:46.058292 kubelet[1757]: E0913 00:53:46.058260 1757 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:53:46.145336 kubelet[1757]: E0913 00:53:46.145242 1757 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:46.145731 kubelet[1757]: E0913 00:53:46.145605 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="400ms" Sep 13 00:53:46.245968 kubelet[1757]: E0913 00:53:46.245867 1757 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:46.259300 kubelet[1757]: E0913 00:53:46.259248 1757 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:53:46.262031 kubelet[1757]: I0913 00:53:46.261999 1757 policy_none.go:49] "None policy: Start" Sep 13 00:53:46.262977 kubelet[1757]: I0913 00:53:46.262959 1757 
memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:53:46.263025 kubelet[1757]: I0913 00:53:46.262986 1757 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:53:46.268499 kubelet[1757]: I0913 00:53:46.268477 1757 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:53:46.266000 audit[1757]: AVC avc: denied { mac_admin } for pid=1757 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:46.266000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:46.266000 audit[1757]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00077adb0 a1=c0007233f8 a2=c00077ad80 a3=25 items=0 ppid=1 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:46.266000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:46.268734 kubelet[1757]: I0913 00:53:46.268547 1757 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:53:46.268734 kubelet[1757]: I0913 00:53:46.268668 1757 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:53:46.268734 kubelet[1757]: I0913 00:53:46.268683 1757 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:53:46.268941 kubelet[1757]: I0913 00:53:46.268924 1757 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:53:46.270116 kubelet[1757]: E0913 00:53:46.270091 1757 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:53:46.370374 kubelet[1757]: I0913 00:53:46.370354 1757 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:53:46.370739 kubelet[1757]: E0913 00:53:46.370716 1757 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Sep 13 00:53:46.545866 kubelet[1757]: E0913 00:53:46.545808 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="800ms" Sep 13 00:53:46.571995 kubelet[1757]: I0913 00:53:46.571940 1757 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:53:46.572448 kubelet[1757]: E0913 00:53:46.572413 1757 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Sep 13 00:53:46.749974 kubelet[1757]: I0913 00:53:46.749885 1757 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:46.749974 kubelet[1757]: I0913 00:53:46.749942 1757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:53:46.749974 kubelet[1757]: I0913 00:53:46.749970 1757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3da8e40551f12363ddff532ba56535e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3da8e40551f12363ddff532ba56535e\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:53:46.749974 kubelet[1757]: I0913 00:53:46.749984 1757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3da8e40551f12363ddff532ba56535e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3da8e40551f12363ddff532ba56535e\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:53:46.750422 kubelet[1757]: I0913 00:53:46.749998 1757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3da8e40551f12363ddff532ba56535e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3da8e40551f12363ddff532ba56535e\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:53:46.750422 kubelet[1757]: I0913 00:53:46.750014 1757 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:46.750422 kubelet[1757]: I0913 00:53:46.750035 1757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:46.750422 kubelet[1757]: I0913 00:53:46.750049 1757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:46.750422 kubelet[1757]: I0913 00:53:46.750123 1757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:46.841611 kubelet[1757]: W0913 00:53:46.841440 1757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 13 00:53:46.841611 kubelet[1757]: E0913 00:53:46.841525 1757 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:46.902478 kubelet[1757]: W0913 00:53:46.902429 1757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 13 00:53:46.902556 kubelet[1757]: E0913 00:53:46.902481 1757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:46.965067 kubelet[1757]: E0913 00:53:46.965039 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:46.965504 kubelet[1757]: E0913 00:53:46.965480 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:46.965872 env[1303]: time="2025-09-13T00:53:46.965810285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3da8e40551f12363ddff532ba56535e,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:46.966454 kubelet[1757]: E0913 00:53:46.966438 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:46.966524 env[1303]: 
time="2025-09-13T00:53:46.966419267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:46.967096 env[1303]: time="2025-09-13T00:53:46.966954287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:46.974122 kubelet[1757]: I0913 00:53:46.974099 1757 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:53:46.974429 kubelet[1757]: E0913 00:53:46.974400 1757 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Sep 13 00:53:47.285837 kubelet[1757]: W0913 00:53:47.285750 1757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 13 00:53:47.285837 kubelet[1757]: E0913 00:53:47.285837 1757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:47.346495 kubelet[1757]: E0913 00:53:47.346464 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="1.6s" Sep 13 00:53:47.536650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877519388.mount: Deactivated successfully. 
Sep 13 00:53:47.541585 env[1303]: time="2025-09-13T00:53:47.541527904Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.542420 env[1303]: time="2025-09-13T00:53:47.542387887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.544142 env[1303]: time="2025-09-13T00:53:47.544101015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.546613 env[1303]: time="2025-09-13T00:53:47.546556631Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.547790 env[1303]: time="2025-09-13T00:53:47.547760622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.548258 kubelet[1757]: W0913 00:53:47.548201 1757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 13 00:53:47.548326 kubelet[1757]: E0913 00:53:47.548272 1757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:47.552866 env[1303]: 
time="2025-09-13T00:53:47.552831013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.554969 env[1303]: time="2025-09-13T00:53:47.554936282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.555951 env[1303]: time="2025-09-13T00:53:47.555925368Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.557511 env[1303]: time="2025-09-13T00:53:47.557488746Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.558777 env[1303]: time="2025-09-13T00:53:47.558752166Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.559322 env[1303]: time="2025-09-13T00:53:47.559301537Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.559882 env[1303]: time="2025-09-13T00:53:47.559856114Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.583204 env[1303]: time="2025-09-13T00:53:47.583117194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:47.583204 env[1303]: time="2025-09-13T00:53:47.583160411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:47.583410 env[1303]: time="2025-09-13T00:53:47.583188548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:47.583633 env[1303]: time="2025-09-13T00:53:47.583594458Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f863fd11406d6bba095530b67a92935af91133f5c6b708c30489348c0bcfc422 pid=1803 runtime=io.containerd.runc.v2 Sep 13 00:53:47.588375 env[1303]: time="2025-09-13T00:53:47.588320492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:47.588488 env[1303]: time="2025-09-13T00:53:47.588359393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:47.588488 env[1303]: time="2025-09-13T00:53:47.588369467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:47.588646 env[1303]: time="2025-09-13T00:53:47.588571946Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f5f3151a613566cd2109e63dd0631e28aa50ceabc281875834ba9de0f0a4960 pid=1823 runtime=io.containerd.runc.v2 Sep 13 00:53:47.588814 env[1303]: time="2025-09-13T00:53:47.588762459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:47.588900 env[1303]: time="2025-09-13T00:53:47.588789326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:47.588900 env[1303]: time="2025-09-13T00:53:47.588799198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:47.589037 env[1303]: time="2025-09-13T00:53:47.588885764Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84884425e5dd94cdc53328208929c44afe0c411656c22b03f6dfa8e3a0aec8c2 pid=1838 runtime=io.containerd.runc.v2 Sep 13 00:53:47.632115 env[1303]: time="2025-09-13T00:53:47.632067110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f863fd11406d6bba095530b67a92935af91133f5c6b708c30489348c0bcfc422\"" Sep 13 00:53:47.633213 kubelet[1757]: E0913 00:53:47.633175 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:47.635157 env[1303]: time="2025-09-13T00:53:47.635125606Z" level=info msg="CreateContainer within sandbox \"f863fd11406d6bba095530b67a92935af91133f5c6b708c30489348c0bcfc422\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:53:47.646234 env[1303]: time="2025-09-13T00:53:47.644652974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3da8e40551f12363ddff532ba56535e,Namespace:kube-system,Attempt:0,} returns sandbox id \"84884425e5dd94cdc53328208929c44afe0c411656c22b03f6dfa8e3a0aec8c2\"" Sep 13 00:53:47.646234 env[1303]: time="2025-09-13T00:53:47.645703451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"8f5f3151a613566cd2109e63dd0631e28aa50ceabc281875834ba9de0f0a4960\"" Sep 13 00:53:47.646671 kubelet[1757]: E0913 00:53:47.646470 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:47.646671 kubelet[1757]: E0913 00:53:47.646556 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:47.647801 env[1303]: time="2025-09-13T00:53:47.647771050Z" level=info msg="CreateContainer within sandbox \"8f5f3151a613566cd2109e63dd0631e28aa50ceabc281875834ba9de0f0a4960\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:53:47.648371 env[1303]: time="2025-09-13T00:53:47.648344462Z" level=info msg="CreateContainer within sandbox \"84884425e5dd94cdc53328208929c44afe0c411656c22b03f6dfa8e3a0aec8c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:53:47.656313 env[1303]: time="2025-09-13T00:53:47.656288305Z" level=info msg="CreateContainer within sandbox \"f863fd11406d6bba095530b67a92935af91133f5c6b708c30489348c0bcfc422\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9c866aa0cfb820da7f2c4826bf6efd4b934cd4fec1257fde7f230030009979ad\"" Sep 13 00:53:47.656818 env[1303]: time="2025-09-13T00:53:47.656797912Z" level=info msg="StartContainer for \"9c866aa0cfb820da7f2c4826bf6efd4b934cd4fec1257fde7f230030009979ad\"" Sep 13 00:53:47.668347 env[1303]: time="2025-09-13T00:53:47.668283384Z" level=info msg="CreateContainer within sandbox \"8f5f3151a613566cd2109e63dd0631e28aa50ceabc281875834ba9de0f0a4960\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d39ccd4b8ded00e27a5e8ebcfb4c1f7734682558ad4e0f68eed5ce4bb2af53bc\"" Sep 13 00:53:47.668881 env[1303]: time="2025-09-13T00:53:47.668854023Z" level=info 
msg="StartContainer for \"d39ccd4b8ded00e27a5e8ebcfb4c1f7734682558ad4e0f68eed5ce4bb2af53bc\"" Sep 13 00:53:47.670307 env[1303]: time="2025-09-13T00:53:47.670230834Z" level=info msg="CreateContainer within sandbox \"84884425e5dd94cdc53328208929c44afe0c411656c22b03f6dfa8e3a0aec8c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d27ecac8f4cf8a5223ffbc9dff13ee7a761a967798b4d1230b0b714bbdd5c8d\"" Sep 13 00:53:47.670877 env[1303]: time="2025-09-13T00:53:47.670848905Z" level=info msg="StartContainer for \"6d27ecac8f4cf8a5223ffbc9dff13ee7a761a967798b4d1230b0b714bbdd5c8d\"" Sep 13 00:53:47.723727 env[1303]: time="2025-09-13T00:53:47.723654839Z" level=info msg="StartContainer for \"9c866aa0cfb820da7f2c4826bf6efd4b934cd4fec1257fde7f230030009979ad\" returns successfully" Sep 13 00:53:47.738088 env[1303]: time="2025-09-13T00:53:47.738021051Z" level=info msg="StartContainer for \"6d27ecac8f4cf8a5223ffbc9dff13ee7a761a967798b4d1230b0b714bbdd5c8d\" returns successfully" Sep 13 00:53:47.749649 env[1303]: time="2025-09-13T00:53:47.749605834Z" level=info msg="StartContainer for \"d39ccd4b8ded00e27a5e8ebcfb4c1f7734682558ad4e0f68eed5ce4bb2af53bc\" returns successfully" Sep 13 00:53:47.776176 kubelet[1757]: I0913 00:53:47.776140 1757 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:53:47.776629 kubelet[1757]: E0913 00:53:47.776503 1757 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Sep 13 00:53:47.964640 kubelet[1757]: E0913 00:53:47.964599 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:47.966424 kubelet[1757]: E0913 00:53:47.966400 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:47.968315 kubelet[1757]: E0913 00:53:47.968290 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:48.952910 kubelet[1757]: E0913 00:53:48.952863 1757 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:53:48.970280 kubelet[1757]: E0913 00:53:48.970250 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:49.270474 kubelet[1757]: E0913 00:53:49.270254 1757 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 13 00:53:49.382315 kubelet[1757]: I0913 00:53:49.382237 1757 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:53:49.390109 kubelet[1757]: I0913 00:53:49.390089 1757 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:53:49.922770 kubelet[1757]: I0913 00:53:49.922629 1757 apiserver.go:52] "Watching apiserver" Sep 13 00:53:49.946004 kubelet[1757]: I0913 00:53:49.945972 1757 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:53:51.058248 kubelet[1757]: E0913 00:53:51.058193 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:51.525037 systemd[1]: Reloading. 
Sep 13 00:53:51.583852 /usr/lib/systemd/system-generators/torcx-generator[2064]: time="2025-09-13T00:53:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:53:51.583902 /usr/lib/systemd/system-generators/torcx-generator[2064]: time="2025-09-13T00:53:51Z" level=info msg="torcx already run" Sep 13 00:53:51.663144 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:53:51.663162 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:53:51.682186 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:53:51.756223 systemd[1]: Stopping kubelet.service... Sep 13 00:53:51.781004 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:53:51.781399 systemd[1]: Stopped kubelet.service. Sep 13 00:53:51.784705 kernel: kauditd_printk_skb: 47 callbacks suppressed Sep 13 00:53:51.784778 kernel: audit: type=1131 audit(1757724831.780:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:51.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:51.783364 systemd[1]: Starting kubelet.service... 
Sep 13 00:53:51.953531 kernel: audit: type=1130 audit(1757724831.948:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:51.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:51.949193 systemd[1]: Started kubelet.service. Sep 13 00:53:51.991905 kubelet[2119]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:53:51.992284 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:53:51.992284 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:53:51.992497 kubelet[2119]: I0913 00:53:51.992353 2119 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:53:51.997931 kubelet[2119]: I0913 00:53:51.997900 2119 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:53:51.997931 kubelet[2119]: I0913 00:53:51.997926 2119 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:53:51.998164 kubelet[2119]: I0913 00:53:51.998149 2119 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:53:51.999413 kubelet[2119]: I0913 00:53:51.999395 2119 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:53:52.001136 kubelet[2119]: I0913 00:53:52.001098 2119 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:53:52.005614 kubelet[2119]: E0913 00:53:52.005581 2119 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:53:52.005614 kubelet[2119]: I0913 00:53:52.005614 2119 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:53:52.009527 kubelet[2119]: I0913 00:53:52.009487 2119 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:53:52.010019 kubelet[2119]: I0913 00:53:52.009991 2119 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:53:52.010164 kubelet[2119]: I0913 00:53:52.010132 2119 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:53:52.010333 kubelet[2119]: I0913 00:53:52.010155 2119 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Sep 13 00:53:52.010416 kubelet[2119]: I0913 00:53:52.010348 2119 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:53:52.010416 kubelet[2119]: I0913 00:53:52.010356 2119 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:53:52.010416 kubelet[2119]: I0913 00:53:52.010390 2119 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:53:52.010502 kubelet[2119]: I0913 00:53:52.010488 2119 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:53:52.010528 kubelet[2119]: I0913 00:53:52.010503 2119 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:53:52.010581 kubelet[2119]: I0913 00:53:52.010536 2119 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:53:52.010581 kubelet[2119]: I0913 00:53:52.010548 2119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:53:52.011616 kubelet[2119]: I0913 00:53:52.011591 2119 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:53:52.011967 kubelet[2119]: I0913 00:53:52.011943 2119 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:53:52.012369 kubelet[2119]: I0913 00:53:52.012330 2119 server.go:1274] "Started kubelet" Sep 13 00:53:52.015978 kubelet[2119]: I0913 00:53:52.015952 2119 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:53:52.016030 kubelet[2119]: I0913 00:53:52.015986 2119 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:53:52.016030 kubelet[2119]: I0913 
00:53:52.016028 2119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:53:52.015000 audit[2119]: AVC avc: denied { mac_admin } for pid=2119 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:52.022113 kernel: audit: type=1400 audit(1757724832.015:224): avc: denied { mac_admin } for pid=2119 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:52.022162 kernel: audit: type=1401 audit(1757724832.015:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:52.022187 kernel: audit: type=1300 audit(1757724832.015:224): arch=c000003e syscall=188 success=no exit=-22 a0=c0008773b0 a1=c000b97170 a2=c000877380 a3=25 items=0 ppid=1 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:52.015000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:52.015000 audit[2119]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008773b0 a1=c000b97170 a2=c000877380 a3=25 items=0 ppid=1 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:52.022347 kubelet[2119]: I0913 00:53:52.019863 2119 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:53:52.022347 kubelet[2119]: I0913 00:53:52.020010 2119 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:53:52.022347 kubelet[2119]: E0913 00:53:52.020153 2119 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:53:52.022347 kubelet[2119]: I0913 
00:53:52.020871 2119 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:53:52.022347 kubelet[2119]: I0913 00:53:52.021147 2119 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:53:52.024174 kubelet[2119]: I0913 00:53:52.023178 2119 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:53:52.025377 kubelet[2119]: I0913 00:53:52.025185 2119 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:53:52.025377 kubelet[2119]: I0913 00:53:52.025340 2119 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:53:52.031637 kernel: audit: type=1327 audit(1757724832.015:224): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:52.015000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:52.031906 kubelet[2119]: I0913 00:53:52.026889 2119 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:53:52.031906 kubelet[2119]: I0913 00:53:52.027056 2119 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:53:52.031906 kubelet[2119]: I0913 00:53:52.027258 2119 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:53:52.031906 kubelet[2119]: I0913 
00:53:52.028068 2119 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:53:52.031906 kubelet[2119]: E0913 00:53:52.030442 2119 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:53:52.031906 kubelet[2119]: I0913 00:53:52.031186 2119 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:53:52.015000 audit[2119]: AVC avc: denied { mac_admin } for pid=2119 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:52.035891 kernel: audit: type=1400 audit(1757724832.015:225): avc: denied { mac_admin } for pid=2119 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:52.035948 kernel: audit: type=1401 audit(1757724832.015:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:52.015000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:52.037452 kubelet[2119]: I0913 00:53:52.037428 2119 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:53:52.015000 audit[2119]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000493b40 a1=c000b97188 a2=c000877440 a3=25 items=0 ppid=1 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:52.037720 kubelet[2119]: I0913 00:53:52.037704 2119 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:53:52.038256 kubelet[2119]: I0913 00:53:52.038240 2119 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:53:52.038381 kubelet[2119]: E0913 00:53:52.038361 2119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:53:52.041769 kernel: audit: type=1300 audit(1757724832.015:225): arch=c000003e syscall=188 success=no exit=-22 a0=c000493b40 a1=c000b97188 a2=c000877440 a3=25 items=0 ppid=1 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:52.041870 kernel: audit: type=1327 audit(1757724832.015:225): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:52.015000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:52.070503 kubelet[2119]: I0913 00:53:52.070477 2119 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 
00:53:52.070503 kubelet[2119]: I0913 00:53:52.070494 2119 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:53:52.070503 kubelet[2119]: I0913 00:53:52.070510 2119 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:53:52.070731 kubelet[2119]: I0913 00:53:52.070711 2119 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:53:52.070771 kubelet[2119]: I0913 00:53:52.070729 2119 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:53:52.070771 kubelet[2119]: I0913 00:53:52.070751 2119 policy_none.go:49] "None policy: Start" Sep 13 00:53:52.071316 kubelet[2119]: I0913 00:53:52.071294 2119 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:53:52.071316 kubelet[2119]: I0913 00:53:52.071316 2119 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:53:52.071464 kubelet[2119]: I0913 00:53:52.071449 2119 state_mem.go:75] "Updated machine memory state" Sep 13 00:53:52.072584 kubelet[2119]: I0913 00:53:52.072543 2119 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:53:52.072000 audit[2119]: AVC avc: denied { mac_admin } for pid=2119 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:52.072000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:52.072000 audit[2119]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001250270 a1=c00124c5a0 a2=c001250240 a3=25 items=0 ppid=1 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:52.072000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:52.072938 kubelet[2119]: I0913 00:53:52.072702 2119 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:53:52.072938 kubelet[2119]: I0913 00:53:52.072842 2119 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:53:52.072992 kubelet[2119]: I0913 00:53:52.072858 2119 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:53:52.074163 kubelet[2119]: I0913 00:53:52.073133 2119 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:53:52.146067 kubelet[2119]: E0913 00:53:52.146017 2119 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:53:52.176039 kubelet[2119]: I0913 00:53:52.176016 2119 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:53:52.181740 kubelet[2119]: I0913 00:53:52.181713 2119 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:53:52.181801 kubelet[2119]: I0913 00:53:52.181795 2119 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:53:52.223667 kubelet[2119]: I0913 00:53:52.223627 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3da8e40551f12363ddff532ba56535e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3da8e40551f12363ddff532ba56535e\") " pod="kube-system/kube-apiserver-localhost" Sep 13 
00:53:52.223667 kubelet[2119]: I0913 00:53:52.223659 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3da8e40551f12363ddff532ba56535e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3da8e40551f12363ddff532ba56535e\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:53:52.223874 kubelet[2119]: I0913 00:53:52.223683 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:52.223874 kubelet[2119]: I0913 00:53:52.223700 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:52.223874 kubelet[2119]: I0913 00:53:52.223714 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:53:52.223874 kubelet[2119]: I0913 00:53:52.223726 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3da8e40551f12363ddff532ba56535e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3da8e40551f12363ddff532ba56535e\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:53:52.223874 
kubelet[2119]: I0913 00:53:52.223740 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:52.224001 kubelet[2119]: I0913 00:53:52.223754 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:52.224001 kubelet[2119]: I0913 00:53:52.223789 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:53:52.445377 kubelet[2119]: E0913 00:53:52.445351 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:52.446762 kubelet[2119]: E0913 00:53:52.446728 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:52.446955 kubelet[2119]: E0913 00:53:52.446899 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:53.011442 kubelet[2119]: I0913 00:53:53.011399 2119 
apiserver.go:52] "Watching apiserver" Sep 13 00:53:53.021401 kubelet[2119]: I0913 00:53:53.021367 2119 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:53:53.048981 kubelet[2119]: E0913 00:53:53.048967 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:53.049658 kubelet[2119]: E0913 00:53:53.049644 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:53.098885 kubelet[2119]: E0913 00:53:53.098693 2119 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:53:53.099315 kubelet[2119]: E0913 00:53:53.099299 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:53.106259 kubelet[2119]: I0913 00:53:53.106183 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.106160419 podStartE2EDuration="2.106160419s" podCreationTimestamp="2025-09-13 00:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:53.099052112 +0000 UTC m=+1.145591752" watchObservedRunningTime="2025-09-13 00:53:53.106160419 +0000 UTC m=+1.152700059" Sep 13 00:53:53.106461 kubelet[2119]: I0913 00:53:53.106278 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.106273563 podStartE2EDuration="1.106273563s" podCreationTimestamp="2025-09-13 00:53:52 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:53.104327167 +0000 UTC m=+1.150866807" watchObservedRunningTime="2025-09-13 00:53:53.106273563 +0000 UTC m=+1.152813203" Sep 13 00:53:53.119931 kubelet[2119]: I0913 00:53:53.119854 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.119836649 podStartE2EDuration="1.119836649s" podCreationTimestamp="2025-09-13 00:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:53.112649794 +0000 UTC m=+1.159189434" watchObservedRunningTime="2025-09-13 00:53:53.119836649 +0000 UTC m=+1.166376289" Sep 13 00:53:54.050059 kubelet[2119]: E0913 00:53:54.050024 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:54.050433 kubelet[2119]: E0913 00:53:54.050220 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:55.229193 kubelet[2119]: E0913 00:53:55.229135 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:55.955957 kubelet[2119]: I0913 00:53:55.955911 2119 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:53:55.956343 env[1303]: time="2025-09-13T00:53:55.956310413Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:53:55.956650 kubelet[2119]: I0913 00:53:55.956524 2119 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:53:56.852689 kubelet[2119]: I0913 00:53:56.852643 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/045711c8-391b-42a2-b300-fe6b5d0be1bd-lib-modules\") pod \"kube-proxy-9l8sb\" (UID: \"045711c8-391b-42a2-b300-fe6b5d0be1bd\") " pod="kube-system/kube-proxy-9l8sb" Sep 13 00:53:56.852689 kubelet[2119]: I0913 00:53:56.852681 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/045711c8-391b-42a2-b300-fe6b5d0be1bd-xtables-lock\") pod \"kube-proxy-9l8sb\" (UID: \"045711c8-391b-42a2-b300-fe6b5d0be1bd\") " pod="kube-system/kube-proxy-9l8sb" Sep 13 00:53:56.852689 kubelet[2119]: I0913 00:53:56.852702 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/045711c8-391b-42a2-b300-fe6b5d0be1bd-kube-proxy\") pod \"kube-proxy-9l8sb\" (UID: \"045711c8-391b-42a2-b300-fe6b5d0be1bd\") " pod="kube-system/kube-proxy-9l8sb" Sep 13 00:53:56.852689 kubelet[2119]: I0913 00:53:56.852718 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfrl\" (UniqueName: \"kubernetes.io/projected/045711c8-391b-42a2-b300-fe6b5d0be1bd-kube-api-access-kdfrl\") pod \"kube-proxy-9l8sb\" (UID: \"045711c8-391b-42a2-b300-fe6b5d0be1bd\") " pod="kube-system/kube-proxy-9l8sb" Sep 13 00:53:57.176276 kubelet[2119]: E0913 00:53:57.176238 2119 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 13 00:53:57.176504 kubelet[2119]: E0913 00:53:57.176489 2119 projected.go:194] Error preparing data for projected volume 
kube-api-access-kdfrl for pod kube-system/kube-proxy-9l8sb: configmap "kube-root-ca.crt" not found Sep 13 00:53:57.176866 kubelet[2119]: E0913 00:53:57.176851 2119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/045711c8-391b-42a2-b300-fe6b5d0be1bd-kube-api-access-kdfrl podName:045711c8-391b-42a2-b300-fe6b5d0be1bd nodeName:}" failed. No retries permitted until 2025-09-13 00:53:57.676650317 +0000 UTC m=+5.723189957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kdfrl" (UniqueName: "kubernetes.io/projected/045711c8-391b-42a2-b300-fe6b5d0be1bd-kube-api-access-kdfrl") pod "kube-proxy-9l8sb" (UID: "045711c8-391b-42a2-b300-fe6b5d0be1bd") : configmap "kube-root-ca.crt" not found Sep 13 00:53:57.254468 kubelet[2119]: I0913 00:53:57.254424 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a90f97b1-e0d8-4f86-814c-3758e1ec9eb2-var-lib-calico\") pod \"tigera-operator-58fc44c59b-8vkqq\" (UID: \"a90f97b1-e0d8-4f86-814c-3758e1ec9eb2\") " pod="tigera-operator/tigera-operator-58fc44c59b-8vkqq" Sep 13 00:53:57.254468 kubelet[2119]: I0913 00:53:57.254467 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ht6n\" (UniqueName: \"kubernetes.io/projected/a90f97b1-e0d8-4f86-814c-3758e1ec9eb2-kube-api-access-2ht6n\") pod \"tigera-operator-58fc44c59b-8vkqq\" (UID: \"a90f97b1-e0d8-4f86-814c-3758e1ec9eb2\") " pod="tigera-operator/tigera-operator-58fc44c59b-8vkqq" Sep 13 00:53:57.359962 kubelet[2119]: I0913 00:53:57.359925 2119 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:53:57.517438 env[1303]: time="2025-09-13T00:53:57.517300671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-8vkqq,Uid:a90f97b1-e0d8-4f86-814c-3758e1ec9eb2,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:53:57.530163 env[1303]: time="2025-09-13T00:53:57.530097671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:57.530163 env[1303]: time="2025-09-13T00:53:57.530145903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:57.530291 env[1303]: time="2025-09-13T00:53:57.530160160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:57.530493 env[1303]: time="2025-09-13T00:53:57.530449888Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7209fdace74ad98da4d0ce5921a7a04ef67826093901ae24aa8e5eb5aa5b5a53 pid=2175 runtime=io.containerd.runc.v2 Sep 13 00:53:57.576075 env[1303]: time="2025-09-13T00:53:57.576022878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-8vkqq,Uid:a90f97b1-e0d8-4f86-814c-3758e1ec9eb2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7209fdace74ad98da4d0ce5921a7a04ef67826093901ae24aa8e5eb5aa5b5a53\"" Sep 13 00:53:57.577875 env[1303]: time="2025-09-13T00:53:57.577847576Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:53:58.003740 kubelet[2119]: E0913 00:53:58.003699 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:58.004189 
env[1303]: time="2025-09-13T00:53:58.004140010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9l8sb,Uid:045711c8-391b-42a2-b300-fe6b5d0be1bd,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:58.100821 env[1303]: time="2025-09-13T00:53:58.100746315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:58.100821 env[1303]: time="2025-09-13T00:53:58.100789126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:58.100821 env[1303]: time="2025-09-13T00:53:58.100803684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:58.101018 env[1303]: time="2025-09-13T00:53:58.100971460Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10e92842f2c6b96197d494d0cb50eacf1bbdccf7afae3cef32a73e0adbcf0ddd pid=2216 runtime=io.containerd.runc.v2 Sep 13 00:53:58.129471 env[1303]: time="2025-09-13T00:53:58.129408801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9l8sb,Uid:045711c8-391b-42a2-b300-fe6b5d0be1bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"10e92842f2c6b96197d494d0cb50eacf1bbdccf7afae3cef32a73e0adbcf0ddd\"" Sep 13 00:53:58.129964 kubelet[2119]: E0913 00:53:58.129930 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:58.132711 env[1303]: time="2025-09-13T00:53:58.132523524Z" level=info msg="CreateContainer within sandbox \"10e92842f2c6b96197d494d0cb50eacf1bbdccf7afae3cef32a73e0adbcf0ddd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:53:58.147318 env[1303]: time="2025-09-13T00:53:58.147266821Z" level=info msg="CreateContainer 
within sandbox \"10e92842f2c6b96197d494d0cb50eacf1bbdccf7afae3cef32a73e0adbcf0ddd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"87bc45cc2d24a5191b40ed754b4c8763eb3d04210cbdcd2f864a7b9380d1ce80\"" Sep 13 00:53:58.147697 env[1303]: time="2025-09-13T00:53:58.147652048Z" level=info msg="StartContainer for \"87bc45cc2d24a5191b40ed754b4c8763eb3d04210cbdcd2f864a7b9380d1ce80\"" Sep 13 00:53:58.195412 env[1303]: time="2025-09-13T00:53:58.195363293Z" level=info msg="StartContainer for \"87bc45cc2d24a5191b40ed754b4c8763eb3d04210cbdcd2f864a7b9380d1ce80\" returns successfully" Sep 13 00:53:58.292597 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:53:58.292764 kernel: audit: type=1325 audit(1757724838.287:227): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.292813 kernel: audit: type=1325 audit(1757724838.287:228): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.287000 audit[2318]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.287000 audit[2317]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.287000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb548fee0 a2=0 a3=7ffdb548fecc items=0 ppid=2266 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.299188 kernel: audit: type=1300 audit(1757724838.287:228): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb548fee0 a2=0 a3=7ffdb548fecc items=0 ppid=2266 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.299240 kernel: audit: type=1327 audit(1757724838.287:228): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:53:58.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:53:58.289000 audit[2319]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2319 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.303502 kernel: audit: type=1325 audit(1757724838.289:229): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2319 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.303546 kernel: audit: type=1300 audit(1757724838.289:229): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8e9c6490 a2=0 a3=7ffe8e9c647c items=0 ppid=2266 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.289000 audit[2319]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8e9c6490 a2=0 a3=7ffe8e9c647c items=0 ppid=2266 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.289000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:53:58.309990 kernel: audit: type=1327 audit(1757724838.289:229): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:53:58.310014 kernel: audit: type=1325 audit(1757724838.289:230): 
table=filter:41 family=2 entries=1 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.289000 audit[2320]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.289000 audit[2320]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde7342710 a2=0 a3=7ffde73426fc items=0 ppid=2266 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.316623 kernel: audit: type=1300 audit(1757724838.289:230): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde7342710 a2=0 a3=7ffde73426fc items=0 ppid=2266 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.316658 kernel: audit: type=1327 audit(1757724838.289:230): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:53:58.289000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:53:58.287000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddc59af90 a2=0 a3=7ffddc59af7c items=0 ppid=2266 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.287000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:53:58.293000 audit[2321]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain 
pid=2321 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.293000 audit[2321]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7622a340 a2=0 a3=7ffd7622a32c items=0 ppid=2266 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.293000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:53:58.293000 audit[2322]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.293000 audit[2322]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe123a6c70 a2=0 a3=7ffe123a6c5c items=0 ppid=2266 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.293000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:53:58.389000 audit[2323]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.389000 audit[2323]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe67a5fde0 a2=0 a3=7ffe67a5fdcc items=0 ppid=2266 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.389000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:53:58.392000 audit[2325]: NETFILTER_CFG 
table=filter:45 family=2 entries=1 op=nft_register_rule pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.392000 audit[2325]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff294ee390 a2=0 a3=7fff294ee37c items=0 ppid=2266 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 13 00:53:58.395000 audit[2328]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.395000 audit[2328]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe05f716b0 a2=0 a3=7ffe05f7169c items=0 ppid=2266 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.395000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 13 00:53:58.396000 audit[2329]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.396000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb57d0370 a2=0 a3=7ffcb57d035c items=0 ppid=2266 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.396000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:53:58.398000 audit[2331]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.398000 audit[2331]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff48400820 a2=0 a3=7fff4840080c items=0 ppid=2266 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.398000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:53:58.400000 audit[2332]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.400000 audit[2332]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd162018e0 a2=0 a3=7ffd162018cc items=0 ppid=2266 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.400000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:53:58.403000 audit[2334]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.403000 audit[2334]: SYSCALL arch=c000003e syscall=46 
success=yes exit=744 a0=3 a1=7ffcda838b00 a2=0 a3=7ffcda838aec items=0 ppid=2266 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.403000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:53:58.406000 audit[2337]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.406000 audit[2337]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe73084aa0 a2=0 a3=7ffe73084a8c items=0 ppid=2266 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.406000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 13 00:53:58.407000 audit[2338]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.407000 audit[2338]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd26340750 a2=0 a3=7ffd2634073c items=0 ppid=2266 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.407000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:53:58.410000 audit[2340]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.410000 audit[2340]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffd88c2270 a2=0 a3=7fffd88c225c items=0 ppid=2266 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:53:58.410000 audit[2341]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.410000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5cc848b0 a2=0 a3=7fff5cc8489c items=0 ppid=2266 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:53:58.413000 audit[2343]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2343 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.413000 audit[2343]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffefb2d11a0 a2=0 a3=7ffefb2d118c items=0 ppid=2266 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:53:58.416000 audit[2346]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.416000 audit[2346]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb5279160 a2=0 a3=7ffcb527914c items=0 ppid=2266 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:53:58.419000 audit[2349]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.419000 audit[2349]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffffdb563b0 a2=0 a3=7ffffdb5639c items=0 ppid=2266 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.419000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:53:58.420000 audit[2350]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.420000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb0369460 a2=0 a3=7ffeb036944c items=0 ppid=2266 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.420000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:53:58.422000 audit[2352]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.422000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc58d0ab10 a2=0 a3=7ffc58d0aafc items=0 ppid=2266 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.422000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:53:58.425000 audit[2355]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2355 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.425000 audit[2355]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe23faf640 a2=0 a3=7ffe23faf62c 
items=0 ppid=2266 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.425000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:53:58.426000 audit[2356]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.426000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff519b6a60 a2=0 a3=7fff519b6a4c items=0 ppid=2266 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:53:58.428000 audit[2358]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:58.428000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffea4e00a70 a2=0 a3=7ffea4e00a5c items=0 ppid=2266 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:53:58.432428 kubelet[2119]: E0913 
00:53:58.432369 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:58.451000 audit[2364]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:58.451000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffebca29df0 a2=0 a3=7ffebca29ddc items=0 ppid=2266 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:58.460000 audit[2364]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:58.460000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffebca29df0 a2=0 a3=7ffebca29ddc items=0 ppid=2266 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:58.462000 audit[2369]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.462000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd99729e80 a2=0 a3=7ffd99729e6c items=0 ppid=2266 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:53:58.464000 audit[2371]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.464000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff608af160 a2=0 a3=7fff608af14c items=0 ppid=2266 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 13 00:53:58.467000 audit[2374]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.467000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffde90b5380 a2=0 a3=7ffde90b536c items=0 ppid=2266 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 13 00:53:58.468000 audit[2375]: NETFILTER_CFG table=filter:68 
family=10 entries=1 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.468000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe631cdb70 a2=0 a3=7ffe631cdb5c items=0 ppid=2266 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.468000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:53:58.472000 audit[2377]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.472000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe1d1ef970 a2=0 a3=7ffe1d1ef95c items=0 ppid=2266 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:53:58.473000 audit[2378]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.473000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc48b7fab0 a2=0 a3=7ffc48b7fa9c items=0 ppid=2266 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.473000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:53:58.475000 audit[2380]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.475000 audit[2380]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeeb8657e0 a2=0 a3=7ffeeb8657cc items=0 ppid=2266 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 13 00:53:58.478000 audit[2383]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.478000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc003b6790 a2=0 a3=7ffc003b677c items=0 ppid=2266 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:53:58.479000 audit[2384]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.479000 audit[2384]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffce9a74650 a2=0 a3=7ffce9a7463c items=0 ppid=2266 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:53:58.481000 audit[2386]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.481000 audit[2386]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd3d59fd90 a2=0 a3=7ffd3d59fd7c items=0 ppid=2266 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:53:58.482000 audit[2387]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.482000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde6ea5cb0 a2=0 a3=7ffde6ea5c9c items=0 ppid=2266 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.482000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:53:58.485000 audit[2389]: NETFILTER_CFG table=filter:76 
family=10 entries=1 op=nft_register_rule pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.485000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc8c2b6ad0 a2=0 a3=7ffc8c2b6abc items=0 ppid=2266 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.485000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:53:58.488000 audit[2392]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.488000 audit[2392]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe88ce1510 a2=0 a3=7ffe88ce14fc items=0 ppid=2266 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.488000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:53:58.491000 audit[2395]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.491000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd6fe86870 a2=0 a3=7ffd6fe8685c items=0 ppid=2266 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.491000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 13 00:53:58.492000 audit[2396]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.492000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd94097090 a2=0 a3=7ffd9409707c items=0 ppid=2266 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.492000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:53:58.494000 audit[2398]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.494000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffda9923820 a2=0 a3=7ffda992380c items=0 ppid=2266 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:53:58.497000 audit[2401]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2401 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.497000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff3ca9eae0 a2=0 a3=7fff3ca9eacc items=0 ppid=2266 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:53:58.498000 audit[2402]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.498000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6e248120 a2=0 a3=7ffe6e24810c items=0 ppid=2266 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:53:58.500000 audit[2404]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.500000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe0a4cb3a0 a2=0 a3=7ffe0a4cb38c items=0 ppid=2266 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.500000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:53:58.501000 audit[2405]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.501000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8c916930 a2=0 a3=7fff8c91691c items=0 ppid=2266 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.501000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:53:58.504000 audit[2407]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2407 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.504000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd0c46d280 a2=0 a3=7ffd0c46d26c items=0 ppid=2266 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.504000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:53:58.507000 audit[2410]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2410 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:58.507000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc5ec26560 a2=0 a3=7ffc5ec2654c items=0 ppid=2266 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:53:58.510000 audit[2412]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:53:58.510000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffca6a52a00 a2=0 a3=7ffca6a529ec items=0 ppid=2266 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.510000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:58.510000 audit[2412]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:53:58.510000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffca6a52a00 a2=0 a3=7ffca6a529ec items=0 ppid=2266 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:58.510000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:59.057651 kubelet[2119]: E0913 00:53:59.057615 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:59.057651 kubelet[2119]: E0913 00:53:59.057651 2119 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:59.401518 kubelet[2119]: I0913 00:53:59.401248 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9l8sb" podStartSLOduration=3.401231158 podStartE2EDuration="3.401231158s" podCreationTimestamp="2025-09-13 00:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:59.40109958 +0000 UTC m=+7.447639220" watchObservedRunningTime="2025-09-13 00:53:59.401231158 +0000 UTC m=+7.447770798" Sep 13 00:53:59.449164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454308232.mount: Deactivated successfully. Sep 13 00:54:00.058875 kubelet[2119]: E0913 00:54:00.058842 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:00.233141 env[1303]: time="2025-09-13T00:54:00.233091681Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:00.235376 env[1303]: time="2025-09-13T00:54:00.235325306Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:00.237174 env[1303]: time="2025-09-13T00:54:00.237144550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:00.238819 env[1303]: time="2025-09-13T00:54:00.238783884Z" level=info msg="ImageCreate event 
&ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:00.239245 env[1303]: time="2025-09-13T00:54:00.239215719Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:54:00.241023 env[1303]: time="2025-09-13T00:54:00.240997672Z" level=info msg="CreateContainer within sandbox \"7209fdace74ad98da4d0ce5921a7a04ef67826093901ae24aa8e5eb5aa5b5a53\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:54:00.253964 env[1303]: time="2025-09-13T00:54:00.253927716Z" level=info msg="CreateContainer within sandbox \"7209fdace74ad98da4d0ce5921a7a04ef67826093901ae24aa8e5eb5aa5b5a53\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6d78cbe65cc83b03dd5e215035910a8dfeeb3c1648a9d2fa70782b4add540122\"" Sep 13 00:54:00.254331 env[1303]: time="2025-09-13T00:54:00.254304266Z" level=info msg="StartContainer for \"6d78cbe65cc83b03dd5e215035910a8dfeeb3c1648a9d2fa70782b4add540122\"" Sep 13 00:54:00.295681 env[1303]: time="2025-09-13T00:54:00.295639190Z" level=info msg="StartContainer for \"6d78cbe65cc83b03dd5e215035910a8dfeeb3c1648a9d2fa70782b4add540122\" returns successfully" Sep 13 00:54:02.397276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d78cbe65cc83b03dd5e215035910a8dfeeb3c1648a9d2fa70782b4add540122-rootfs.mount: Deactivated successfully. 
Sep 13 00:54:02.658661 env[1303]: time="2025-09-13T00:54:02.648300371Z" level=info msg="shim disconnected" id=6d78cbe65cc83b03dd5e215035910a8dfeeb3c1648a9d2fa70782b4add540122 Sep 13 00:54:02.658661 env[1303]: time="2025-09-13T00:54:02.648355254Z" level=warning msg="cleaning up after shim disconnected" id=6d78cbe65cc83b03dd5e215035910a8dfeeb3c1648a9d2fa70782b4add540122 namespace=k8s.io Sep 13 00:54:02.658661 env[1303]: time="2025-09-13T00:54:02.648364913Z" level=info msg="cleaning up dead shim" Sep 13 00:54:02.671701 env[1303]: time="2025-09-13T00:54:02.671617671Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2462 runtime=io.containerd.runc.v2\n" Sep 13 00:54:02.895551 kubelet[2119]: E0913 00:54:02.895522 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:02.902936 kubelet[2119]: I0913 00:54:02.902877 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-8vkqq" podStartSLOduration=3.2401965759999998 podStartE2EDuration="5.902858402s" podCreationTimestamp="2025-09-13 00:53:57 +0000 UTC" firstStartedPulling="2025-09-13 00:53:57.577310391 +0000 UTC m=+5.623850031" lastFinishedPulling="2025-09-13 00:54:00.239972227 +0000 UTC m=+8.286511857" observedRunningTime="2025-09-13 00:54:01.136371939 +0000 UTC m=+9.182911579" watchObservedRunningTime="2025-09-13 00:54:02.902858402 +0000 UTC m=+10.949398032" Sep 13 00:54:03.064161 kubelet[2119]: I0913 00:54:03.064046 2119 scope.go:117] "RemoveContainer" containerID="6d78cbe65cc83b03dd5e215035910a8dfeeb3c1648a9d2fa70782b4add540122" Sep 13 00:54:03.065546 env[1303]: time="2025-09-13T00:54:03.065499932Z" level=info msg="CreateContainer within sandbox \"7209fdace74ad98da4d0ce5921a7a04ef67826093901ae24aa8e5eb5aa5b5a53\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 13 00:54:03.080956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4017122248.mount: Deactivated successfully. Sep 13 00:54:03.089831 env[1303]: time="2025-09-13T00:54:03.089784311Z" level=info msg="CreateContainer within sandbox \"7209fdace74ad98da4d0ce5921a7a04ef67826093901ae24aa8e5eb5aa5b5a53\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b1dd98cca0c5a2ae84b78af5bcfce60a68673ba243db26963fdb1496d9da90a5\"" Sep 13 00:54:03.092163 env[1303]: time="2025-09-13T00:54:03.090943397Z" level=info msg="StartContainer for \"b1dd98cca0c5a2ae84b78af5bcfce60a68673ba243db26963fdb1496d9da90a5\"" Sep 13 00:54:03.138772 env[1303]: time="2025-09-13T00:54:03.138706448Z" level=info msg="StartContainer for \"b1dd98cca0c5a2ae84b78af5bcfce60a68673ba243db26963fdb1496d9da90a5\" returns successfully" Sep 13 00:54:04.123720 update_engine[1295]: I0913 00:54:04.123651 1295 update_attempter.cc:509] Updating boot flags... Sep 13 00:54:05.233487 kubelet[2119]: E0913 00:54:05.233444 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:05.683024 sudo[1467]: pam_unix(sudo:session): session closed for user root Sep 13 00:54:05.681000 audit[1467]: USER_END pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:54:05.684194 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 13 00:54:05.684252 kernel: audit: type=1106 audit(1757724845.681:278): pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:05.681000 audit[1467]: CRED_DISP pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:54:05.690402 sshd[1463]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:05.690857 kernel: audit: type=1104 audit(1757724845.681:279): pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:54:05.689000 audit[1463]: USER_END pid=1463 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:05.691591 kernel: audit: type=1106 audit(1757724845.689:280): pid=1463 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:05.692692 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:57086.service: Deactivated successfully. Sep 13 00:54:05.693876 systemd-logind[1289]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:54:05.693921 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:54:05.694841 systemd-logind[1289]: Removed session 7. 
Sep 13 00:54:05.689000 audit[1463]: CRED_DISP pid=1463 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:05.698886 kernel: audit: type=1104 audit(1757724845.689:281): pid=1463 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:05.698935 kernel: audit: type=1131 audit(1757724845.691:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.131:22-10.0.0.1:57086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:05.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.131:22-10.0.0.1:57086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:06.660000 audit[2581]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:06.664579 kernel: audit: type=1325 audit(1757724846.660:283): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:06.660000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffebde06b20 a2=0 a3=7ffebde06b0c items=0 ppid=2266 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:06.670582 kernel: audit: type=1300 audit(1757724846.660:283): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffebde06b20 a2=0 a3=7ffebde06b0c items=0 ppid=2266 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:06.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:06.673581 kernel: audit: type=1327 audit(1757724846.660:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:06.672000 audit[2581]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:06.680901 kernel: audit: type=1325 audit(1757724846.672:284): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:06.680946 kernel: audit: type=1300 audit(1757724846.672:284): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffebde06b20 a2=0 
a3=0 items=0 ppid=2266 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:06.672000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffebde06b20 a2=0 a3=0 items=0 ppid=2266 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:06.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:06.685000 audit[2583]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:06.685000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd11471c90 a2=0 a3=7ffd11471c7c items=0 ppid=2266 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:06.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:06.692000 audit[2583]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:06.692000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd11471c90 a2=0 a3=0 items=0 ppid=2266 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:06.692000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:08.440000 audit[2585]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:08.440000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd954c71e0 a2=0 a3=7ffd954c71cc items=0 ppid=2266 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:08.440000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:08.446000 audit[2585]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:08.446000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd954c71e0 a2=0 a3=0 items=0 ppid=2266 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:08.446000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:08.512910 kubelet[2119]: W0913 00:54:08.512825 2119 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Sep 13 00:54:08.512910 kubelet[2119]: E0913 00:54:08.512877 2119 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to 
watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 13 00:54:08.631455 kubelet[2119]: I0913 00:54:08.631409 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ee55794-40a5-43ff-8187-6ce08ee44f76-tigera-ca-bundle\") pod \"calico-typha-5fddb4c47c-x6dnv\" (UID: \"1ee55794-40a5-43ff-8187-6ce08ee44f76\") " pod="calico-system/calico-typha-5fddb4c47c-x6dnv" Sep 13 00:54:08.631455 kubelet[2119]: I0913 00:54:08.631449 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gltkq\" (UniqueName: \"kubernetes.io/projected/1ee55794-40a5-43ff-8187-6ce08ee44f76-kube-api-access-gltkq\") pod \"calico-typha-5fddb4c47c-x6dnv\" (UID: \"1ee55794-40a5-43ff-8187-6ce08ee44f76\") " pod="calico-system/calico-typha-5fddb4c47c-x6dnv" Sep 13 00:54:08.631455 kubelet[2119]: I0913 00:54:08.631468 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1ee55794-40a5-43ff-8187-6ce08ee44f76-typha-certs\") pod \"calico-typha-5fddb4c47c-x6dnv\" (UID: \"1ee55794-40a5-43ff-8187-6ce08ee44f76\") " pod="calico-system/calico-typha-5fddb4c47c-x6dnv" Sep 13 00:54:09.033810 kubelet[2119]: I0913 00:54:09.033761 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bac9eb35-17de-483b-b688-cdafecf92b02-var-run-calico\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.033810 kubelet[2119]: I0913 00:54:09.033800 2119 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bac9eb35-17de-483b-b688-cdafecf92b02-cni-log-dir\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.033810 kubelet[2119]: I0913 00:54:09.033815 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bac9eb35-17de-483b-b688-cdafecf92b02-cni-bin-dir\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.034051 kubelet[2119]: I0913 00:54:09.033828 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bac9eb35-17de-483b-b688-cdafecf92b02-flexvol-driver-host\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.034051 kubelet[2119]: I0913 00:54:09.033845 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bac9eb35-17de-483b-b688-cdafecf92b02-node-certs\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.034051 kubelet[2119]: I0913 00:54:09.033858 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bac9eb35-17de-483b-b688-cdafecf92b02-tigera-ca-bundle\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.034051 kubelet[2119]: I0913 00:54:09.033872 2119 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bac9eb35-17de-483b-b688-cdafecf92b02-policysync\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.034051 kubelet[2119]: I0913 00:54:09.033886 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bac9eb35-17de-483b-b688-cdafecf92b02-lib-modules\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.034171 kubelet[2119]: I0913 00:54:09.033912 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfmpj\" (UniqueName: \"kubernetes.io/projected/bac9eb35-17de-483b-b688-cdafecf92b02-kube-api-access-hfmpj\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.034171 kubelet[2119]: I0913 00:54:09.033927 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bac9eb35-17de-483b-b688-cdafecf92b02-var-lib-calico\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.034171 kubelet[2119]: I0913 00:54:09.033940 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bac9eb35-17de-483b-b688-cdafecf92b02-xtables-lock\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.034171 kubelet[2119]: I0913 00:54:09.033960 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bac9eb35-17de-483b-b688-cdafecf92b02-cni-net-dir\") pod \"calico-node-zn578\" (UID: \"bac9eb35-17de-483b-b688-cdafecf92b02\") " pod="calico-system/calico-node-zn578" Sep 13 00:54:09.137438 kubelet[2119]: E0913 00:54:09.137406 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.137438 kubelet[2119]: W0913 00:54:09.137426 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.137438 kubelet[2119]: E0913 00:54:09.137461 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.141504 kubelet[2119]: E0913 00:54:09.141485 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.141504 kubelet[2119]: W0913 00:54:09.141499 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.141612 kubelet[2119]: E0913 00:54:09.141512 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.145132 kubelet[2119]: E0913 00:54:09.145101 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.145132 kubelet[2119]: W0913 00:54:09.145123 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.145292 kubelet[2119]: E0913 00:54:09.145142 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.191960 kubelet[2119]: E0913 00:54:09.191899 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzvl6" podUID="ad838603-c026-4e41-bf47-8168df866652" Sep 13 00:54:09.197427 env[1303]: time="2025-09-13T00:54:09.197382594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zn578,Uid:bac9eb35-17de-483b-b688-cdafecf92b02,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:09.218685 env[1303]: time="2025-09-13T00:54:09.218597589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:09.218685 env[1303]: time="2025-09-13T00:54:09.218633767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:09.218685 env[1303]: time="2025-09-13T00:54:09.218644148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:09.218881 env[1303]: time="2025-09-13T00:54:09.218833534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/229b68a6b87fef40fda2e8f837035196ebc79c696128ac71122410471a5a7c00 pid=2610 runtime=io.containerd.runc.v2 Sep 13 00:54:09.238744 kubelet[2119]: E0913 00:54:09.238717 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.238744 kubelet[2119]: W0913 00:54:09.238739 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.238939 kubelet[2119]: E0913 00:54:09.238759 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.239141 kubelet[2119]: E0913 00:54:09.239112 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.239141 kubelet[2119]: W0913 00:54:09.239138 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.239216 kubelet[2119]: E0913 00:54:09.239162 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.239463 kubelet[2119]: E0913 00:54:09.239446 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.239528 kubelet[2119]: W0913 00:54:09.239458 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.239528 kubelet[2119]: E0913 00:54:09.239478 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.239714 kubelet[2119]: E0913 00:54:09.239698 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.239714 kubelet[2119]: W0913 00:54:09.239712 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.239804 kubelet[2119]: E0913 00:54:09.239724 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.240124 kubelet[2119]: E0913 00:54:09.239958 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.240124 kubelet[2119]: W0913 00:54:09.240116 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.240124 kubelet[2119]: E0913 00:54:09.240125 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.240488 kubelet[2119]: E0913 00:54:09.240466 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.240488 kubelet[2119]: W0913 00:54:09.240484 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.240607 kubelet[2119]: E0913 00:54:09.240502 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.242993 kubelet[2119]: E0913 00:54:09.242967 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.242993 kubelet[2119]: W0913 00:54:09.242981 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.242993 kubelet[2119]: E0913 00:54:09.242990 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.243188 kubelet[2119]: E0913 00:54:09.243172 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.243188 kubelet[2119]: W0913 00:54:09.243182 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.243188 kubelet[2119]: E0913 00:54:09.243190 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.243405 kubelet[2119]: E0913 00:54:09.243374 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.243405 kubelet[2119]: W0913 00:54:09.243388 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.243405 kubelet[2119]: E0913 00:54:09.243397 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.243601 kubelet[2119]: E0913 00:54:09.243586 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.243601 kubelet[2119]: W0913 00:54:09.243597 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.243706 kubelet[2119]: E0913 00:54:09.243606 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.243796 kubelet[2119]: E0913 00:54:09.243780 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.243796 kubelet[2119]: W0913 00:54:09.243790 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.243796 kubelet[2119]: E0913 00:54:09.243798 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.243955 kubelet[2119]: E0913 00:54:09.243939 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.243955 kubelet[2119]: W0913 00:54:09.243950 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.244030 kubelet[2119]: E0913 00:54:09.243958 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.244115 kubelet[2119]: E0913 00:54:09.244101 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.244115 kubelet[2119]: W0913 00:54:09.244111 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.244166 kubelet[2119]: E0913 00:54:09.244119 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.244271 kubelet[2119]: E0913 00:54:09.244259 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.244271 kubelet[2119]: W0913 00:54:09.244269 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.244322 kubelet[2119]: E0913 00:54:09.244276 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.244424 kubelet[2119]: E0913 00:54:09.244408 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.244424 kubelet[2119]: W0913 00:54:09.244418 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.244424 kubelet[2119]: E0913 00:54:09.244426 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.244633 kubelet[2119]: E0913 00:54:09.244620 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.244633 kubelet[2119]: W0913 00:54:09.244630 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.244705 kubelet[2119]: E0913 00:54:09.244638 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.244819 kubelet[2119]: E0913 00:54:09.244803 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.244819 kubelet[2119]: W0913 00:54:09.244813 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.244819 kubelet[2119]: E0913 00:54:09.244820 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.244982 kubelet[2119]: E0913 00:54:09.244967 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.244982 kubelet[2119]: W0913 00:54:09.244977 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.244982 kubelet[2119]: E0913 00:54:09.244984 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.245128 kubelet[2119]: E0913 00:54:09.245114 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.245128 kubelet[2119]: W0913 00:54:09.245124 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.245180 kubelet[2119]: E0913 00:54:09.245131 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.245298 kubelet[2119]: E0913 00:54:09.245287 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.245298 kubelet[2119]: W0913 00:54:09.245296 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.245349 kubelet[2119]: E0913 00:54:09.245304 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 00:54:09.245486 kubelet[2119]: E0913 00:54:09.245472 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:09.245486 kubelet[2119]: W0913 00:54:09.245482 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:09.245590 kubelet[2119]: E0913 00:54:09.245492 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:09.252197 env[1303]: time="2025-09-13T00:54:09.252159744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zn578,Uid:bac9eb35-17de-483b-b688-cdafecf92b02,Namespace:calico-system,Attempt:0,} returns sandbox id \"229b68a6b87fef40fda2e8f837035196ebc79c696128ac71122410471a5a7c00\""
Sep 13 00:54:09.254538 env[1303]: time="2025-09-13T00:54:09.253709221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:54:09.339927 kubelet[2119]: E0913 00:54:09.339800 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:09.339927 kubelet[2119]: W0913 00:54:09.339823 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:09.339927 kubelet[2119]: E0913 00:54:09.339843 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:09.339927 kubelet[2119]: I0913 00:54:09.339870 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ad838603-c026-4e41-bf47-8168df866652-registration-dir\") pod \"csi-node-driver-wzvl6\" (UID: \"ad838603-c026-4e41-bf47-8168df866652\") " pod="calico-system/csi-node-driver-wzvl6"
Sep 13 00:54:09.340171 kubelet[2119]: E0913 00:54:09.340104 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:09.340171 kubelet[2119]: W0913 00:54:09.340132 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:09.340171 kubelet[2119]: E0913 00:54:09.340157 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:09.340252 kubelet[2119]: I0913 00:54:09.340188 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ad838603-c026-4e41-bf47-8168df866652-varrun\") pod \"csi-node-driver-wzvl6\" (UID: \"ad838603-c026-4e41-bf47-8168df866652\") " pod="calico-system/csi-node-driver-wzvl6"
Sep 13 00:54:09.340742 kubelet[2119]: E0913 00:54:09.340494 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:09.340742 kubelet[2119]: W0913 00:54:09.340528 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:09.340742 kubelet[2119]: E0913 00:54:09.340572 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:09.340742 kubelet[2119]: I0913 00:54:09.340606 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdzk\" (UniqueName: \"kubernetes.io/projected/ad838603-c026-4e41-bf47-8168df866652-kube-api-access-9kdzk\") pod \"csi-node-driver-wzvl6\" (UID: \"ad838603-c026-4e41-bf47-8168df866652\") " pod="calico-system/csi-node-driver-wzvl6"
Sep 13 00:54:09.340944 kubelet[2119]: E0913 00:54:09.340919 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:09.340944 kubelet[2119]: W0913 00:54:09.340934 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:09.340944 kubelet[2119]: E0913 00:54:09.340945 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:09.341156 kubelet[2119]: E0913 00:54:09.341131 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:09.341156 kubelet[2119]: W0913 00:54:09.341138 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:09.341156 kubelet[2119]: E0913 00:54:09.341145 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:09.341254 kubelet[2119]: I0913 00:54:09.341158 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad838603-c026-4e41-bf47-8168df866652-kubelet-dir\") pod \"csi-node-driver-wzvl6\" (UID: \"ad838603-c026-4e41-bf47-8168df866652\") " pod="calico-system/csi-node-driver-wzvl6"
Sep 13 00:54:09.342524 kubelet[2119]: E0913 00:54:09.341872 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:09.342524 kubelet[2119]: W0913 00:54:09.341885 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:09.342524 kubelet[2119]: E0913 00:54:09.341918 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:09.342524 kubelet[2119]: I0913 00:54:09.341950 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ad838603-c026-4e41-bf47-8168df866652-socket-dir\") pod \"csi-node-driver-wzvl6\" (UID: \"ad838603-c026-4e41-bf47-8168df866652\") " pod="calico-system/csi-node-driver-wzvl6"
Sep 13 00:54:09.342524 kubelet[2119]: E0913 00:54:09.342078 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:09.342524 kubelet[2119]: W0913 00:54:09.342089 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:09.342524 kubelet[2119]: E0913 00:54:09.342130 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:09.342524 kubelet[2119]: E0913 00:54:09.342305 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:09.342524 kubelet[2119]: W0913 00:54:09.342315 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:09.342792 kubelet[2119]: E0913 00:54:09.342342 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.342792 kubelet[2119]: E0913 00:54:09.342484 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.342792 kubelet[2119]: W0913 00:54:09.342496 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.342792 kubelet[2119]: E0913 00:54:09.342519 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.342792 kubelet[2119]: E0913 00:54:09.342712 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.342792 kubelet[2119]: W0913 00:54:09.342723 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.342792 kubelet[2119]: E0913 00:54:09.342754 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.343003 kubelet[2119]: E0913 00:54:09.342930 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.343003 kubelet[2119]: W0913 00:54:09.342940 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.343003 kubelet[2119]: E0913 00:54:09.342957 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.343114 kubelet[2119]: E0913 00:54:09.343101 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.343114 kubelet[2119]: W0913 00:54:09.343111 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.343168 kubelet[2119]: E0913 00:54:09.343120 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.343287 kubelet[2119]: E0913 00:54:09.343275 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.343287 kubelet[2119]: W0913 00:54:09.343285 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.343334 kubelet[2119]: E0913 00:54:09.343292 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.343450 kubelet[2119]: E0913 00:54:09.343438 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.343450 kubelet[2119]: W0913 00:54:09.343448 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.343497 kubelet[2119]: E0913 00:54:09.343456 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.343630 kubelet[2119]: E0913 00:54:09.343618 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.343657 kubelet[2119]: W0913 00:54:09.343631 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.343657 kubelet[2119]: E0913 00:54:09.343641 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.343828 kubelet[2119]: E0913 00:54:09.343811 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.343828 kubelet[2119]: W0913 00:54:09.343824 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.343900 kubelet[2119]: E0913 00:54:09.343834 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.442748 kubelet[2119]: E0913 00:54:09.442712 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.442748 kubelet[2119]: W0913 00:54:09.442738 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.442748 kubelet[2119]: E0913 00:54:09.442760 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.442982 kubelet[2119]: E0913 00:54:09.442953 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.442982 kubelet[2119]: W0913 00:54:09.442961 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.442982 kubelet[2119]: E0913 00:54:09.442969 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.443282 kubelet[2119]: E0913 00:54:09.443247 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.443282 kubelet[2119]: W0913 00:54:09.443271 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.443470 kubelet[2119]: E0913 00:54:09.443341 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.443649 kubelet[2119]: E0913 00:54:09.443620 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.443649 kubelet[2119]: W0913 00:54:09.443643 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.443752 kubelet[2119]: E0913 00:54:09.443676 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.443844 kubelet[2119]: E0913 00:54:09.443830 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.443844 kubelet[2119]: W0913 00:54:09.443841 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.443903 kubelet[2119]: E0913 00:54:09.443848 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.444029 kubelet[2119]: E0913 00:54:09.444007 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.444029 kubelet[2119]: W0913 00:54:09.444022 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.444117 kubelet[2119]: E0913 00:54:09.444038 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.444300 kubelet[2119]: E0913 00:54:09.444276 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.444300 kubelet[2119]: W0913 00:54:09.444288 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.444300 kubelet[2119]: E0913 00:54:09.444301 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.444482 kubelet[2119]: E0913 00:54:09.444465 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.444482 kubelet[2119]: W0913 00:54:09.444478 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.444554 kubelet[2119]: E0913 00:54:09.444491 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.444798 kubelet[2119]: E0913 00:54:09.444782 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.444798 kubelet[2119]: W0913 00:54:09.444794 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.444879 kubelet[2119]: E0913 00:54:09.444830 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.445005 kubelet[2119]: E0913 00:54:09.444987 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.445005 kubelet[2119]: W0913 00:54:09.445001 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.445104 kubelet[2119]: E0913 00:54:09.445087 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.445200 kubelet[2119]: E0913 00:54:09.445186 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.445200 kubelet[2119]: W0913 00:54:09.445196 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.445285 kubelet[2119]: E0913 00:54:09.445209 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.445392 kubelet[2119]: E0913 00:54:09.445381 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.445392 kubelet[2119]: W0913 00:54:09.445389 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.445481 kubelet[2119]: E0913 00:54:09.445401 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.445581 kubelet[2119]: E0913 00:54:09.445551 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.445581 kubelet[2119]: W0913 00:54:09.445574 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.445640 kubelet[2119]: E0913 00:54:09.445586 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.445900 kubelet[2119]: E0913 00:54:09.445850 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.445900 kubelet[2119]: W0913 00:54:09.445861 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.445900 kubelet[2119]: E0913 00:54:09.445874 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.446129 kubelet[2119]: E0913 00:54:09.446112 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.446129 kubelet[2119]: W0913 00:54:09.446124 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.446221 kubelet[2119]: E0913 00:54:09.446135 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.446369 kubelet[2119]: E0913 00:54:09.446351 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.446369 kubelet[2119]: W0913 00:54:09.446366 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.446447 kubelet[2119]: E0913 00:54:09.446401 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.446600 kubelet[2119]: E0913 00:54:09.446552 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.446600 kubelet[2119]: W0913 00:54:09.446588 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.446692 kubelet[2119]: E0913 00:54:09.446623 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.446864 kubelet[2119]: E0913 00:54:09.446845 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.446864 kubelet[2119]: W0913 00:54:09.446859 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.446940 kubelet[2119]: E0913 00:54:09.446873 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.447095 kubelet[2119]: E0913 00:54:09.447079 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.447095 kubelet[2119]: W0913 00:54:09.447092 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.447169 kubelet[2119]: E0913 00:54:09.447107 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.447307 kubelet[2119]: E0913 00:54:09.447289 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.447307 kubelet[2119]: W0913 00:54:09.447302 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.447395 kubelet[2119]: E0913 00:54:09.447317 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.447485 kubelet[2119]: E0913 00:54:09.447469 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.447485 kubelet[2119]: W0913 00:54:09.447485 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.447592 kubelet[2119]: E0913 00:54:09.447498 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.447713 kubelet[2119]: E0913 00:54:09.447685 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.447713 kubelet[2119]: W0913 00:54:09.447697 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.447713 kubelet[2119]: E0913 00:54:09.447710 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.447927 kubelet[2119]: E0913 00:54:09.447910 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.447927 kubelet[2119]: W0913 00:54:09.447921 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.448006 kubelet[2119]: E0913 00:54:09.447954 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.448125 kubelet[2119]: E0913 00:54:09.448108 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.448125 kubelet[2119]: W0913 00:54:09.448121 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.448213 kubelet[2119]: E0913 00:54:09.448160 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.448326 kubelet[2119]: E0913 00:54:09.448299 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.448326 kubelet[2119]: W0913 00:54:09.448311 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.448326 kubelet[2119]: E0913 00:54:09.448324 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.448546 kubelet[2119]: E0913 00:54:09.448470 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.448546 kubelet[2119]: W0913 00:54:09.448478 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.448546 kubelet[2119]: E0913 00:54:09.448486 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.451284 kubelet[2119]: E0913 00:54:09.451257 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.451284 kubelet[2119]: W0913 00:54:09.451270 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.451284 kubelet[2119]: E0913 00:54:09.451279 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.456000 audit[2709]: NETFILTER_CFG table=filter:95 family=2 entries=20 op=nft_register_rule pid=2709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:09.456000 audit[2709]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcc73a3030 a2=0 a3=7ffcc73a301c items=0 ppid=2266 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:09.456000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:09.462000 audit[2709]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:09.462000 audit[2709]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc73a3030 a2=0 a3=0 items=0 ppid=2266 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:09.462000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:09.546843 kubelet[2119]: E0913 00:54:09.546818 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.546843 kubelet[2119]: W0913 00:54:09.546835 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.547202 kubelet[2119]: E0913 00:54:09.546853 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.647403 kubelet[2119]: E0913 00:54:09.647305 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.647403 kubelet[2119]: W0913 00:54:09.647325 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.647403 kubelet[2119]: E0913 00:54:09.647346 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.732931 kubelet[2119]: E0913 00:54:09.732882 2119 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Sep 13 00:54:09.733073 kubelet[2119]: E0913 00:54:09.732999 2119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ee55794-40a5-43ff-8187-6ce08ee44f76-typha-certs podName:1ee55794-40a5-43ff-8187-6ce08ee44f76 nodeName:}" failed. 
No retries permitted until 2025-09-13 00:54:10.232967992 +0000 UTC m=+18.279507632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/1ee55794-40a5-43ff-8187-6ce08ee44f76-typha-certs") pod "calico-typha-5fddb4c47c-x6dnv" (UID: "1ee55794-40a5-43ff-8187-6ce08ee44f76") : failed to sync secret cache: timed out waiting for the condition Sep 13 00:54:09.748352 kubelet[2119]: E0913 00:54:09.748319 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.748352 kubelet[2119]: W0913 00:54:09.748342 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.748447 kubelet[2119]: E0913 00:54:09.748363 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:09.849642 kubelet[2119]: E0913 00:54:09.849595 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.849642 kubelet[2119]: W0913 00:54:09.849622 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.849642 kubelet[2119]: E0913 00:54:09.849653 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:09.950727 kubelet[2119]: E0913 00:54:09.950695 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:09.950727 kubelet[2119]: W0913 00:54:09.950722 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:09.950909 kubelet[2119]: E0913 00:54:09.950746 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:10.051291 kubelet[2119]: E0913 00:54:10.051258 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:10.051291 kubelet[2119]: W0913 00:54:10.051275 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:10.051291 kubelet[2119]: E0913 00:54:10.051291 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:10.152159 kubelet[2119]: E0913 00:54:10.152124 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:10.152159 kubelet[2119]: W0913 00:54:10.152146 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:10.152159 kubelet[2119]: E0913 00:54:10.152168 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:10.253388 kubelet[2119]: E0913 00:54:10.253261 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:10.253388 kubelet[2119]: W0913 00:54:10.253306 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:10.253388 kubelet[2119]: E0913 00:54:10.253334 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:10.253729 kubelet[2119]: E0913 00:54:10.253686 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:10.253729 kubelet[2119]: W0913 00:54:10.253702 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:10.253729 kubelet[2119]: E0913 00:54:10.253713 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:10.254098 kubelet[2119]: E0913 00:54:10.254068 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:10.254098 kubelet[2119]: W0913 00:54:10.254094 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:10.254187 kubelet[2119]: E0913 00:54:10.254122 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:10.254392 kubelet[2119]: E0913 00:54:10.254379 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:10.254392 kubelet[2119]: W0913 00:54:10.254388 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:10.254487 kubelet[2119]: E0913 00:54:10.254397 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:10.254666 kubelet[2119]: E0913 00:54:10.254644 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:10.254666 kubelet[2119]: W0913 00:54:10.254664 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:10.254734 kubelet[2119]: E0913 00:54:10.254672 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:10.261730 kubelet[2119]: E0913 00:54:10.261691 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:10.261730 kubelet[2119]: W0913 00:54:10.261710 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:10.261730 kubelet[2119]: E0913 00:54:10.261725 2119 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:10.305892 kubelet[2119]: E0913 00:54:10.305839 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:10.306308 env[1303]: time="2025-09-13T00:54:10.306249390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fddb4c47c-x6dnv,Uid:1ee55794-40a5-43ff-8187-6ce08ee44f76,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:10.332670 env[1303]: time="2025-09-13T00:54:10.332555614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:10.332670 env[1303]: time="2025-09-13T00:54:10.332621728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:10.332670 env[1303]: time="2025-09-13T00:54:10.332632178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:10.332844 env[1303]: time="2025-09-13T00:54:10.332801676Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e8037bac738d5158bbe1cc637cc6f898fab95dadd1e97c3e653a7c969a465f7 pid=2731 runtime=io.containerd.runc.v2 Sep 13 00:54:10.377066 env[1303]: time="2025-09-13T00:54:10.377026112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fddb4c47c-x6dnv,Uid:1ee55794-40a5-43ff-8187-6ce08ee44f76,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e8037bac738d5158bbe1cc637cc6f898fab95dadd1e97c3e653a7c969a465f7\"" Sep 13 00:54:10.379294 kubelet[2119]: E0913 00:54:10.378850 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:10.666256 env[1303]: time="2025-09-13T00:54:10.666187894Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:10.667964 env[1303]: time="2025-09-13T00:54:10.667924594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:10.669702 env[1303]: time="2025-09-13T00:54:10.669641877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:10.671212 env[1303]: time="2025-09-13T00:54:10.671182186Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:54:10.671817 env[1303]: time="2025-09-13T00:54:10.671776475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:54:10.674600 env[1303]: time="2025-09-13T00:54:10.674028364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:54:10.674600 env[1303]: time="2025-09-13T00:54:10.674456049Z" level=info msg="CreateContainer within sandbox \"229b68a6b87fef40fda2e8f837035196ebc79c696128ac71122410471a5a7c00\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:54:10.688387 env[1303]: time="2025-09-13T00:54:10.688339536Z" level=info msg="CreateContainer within sandbox \"229b68a6b87fef40fda2e8f837035196ebc79c696128ac71122410471a5a7c00\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7fde84d26f86e6288f495d983cfc63264430fd0ff6bd4c3313b4971ef8694319\"" Sep 13 00:54:10.689024 env[1303]: time="2025-09-13T00:54:10.688972487Z" level=info msg="StartContainer for \"7fde84d26f86e6288f495d983cfc63264430fd0ff6bd4c3313b4971ef8694319\"" Sep 13 00:54:10.775674 env[1303]: time="2025-09-13T00:54:10.775098094Z" level=info msg="StartContainer for \"7fde84d26f86e6288f495d983cfc63264430fd0ff6bd4c3313b4971ef8694319\" returns successfully" Sep 13 00:54:10.789236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fde84d26f86e6288f495d983cfc63264430fd0ff6bd4c3313b4971ef8694319-rootfs.mount: Deactivated successfully. 
Sep 13 00:54:10.804627 env[1303]: time="2025-09-13T00:54:10.804586978Z" level=info msg="shim disconnected" id=7fde84d26f86e6288f495d983cfc63264430fd0ff6bd4c3313b4971ef8694319 Sep 13 00:54:10.804745 env[1303]: time="2025-09-13T00:54:10.804628576Z" level=warning msg="cleaning up after shim disconnected" id=7fde84d26f86e6288f495d983cfc63264430fd0ff6bd4c3313b4971ef8694319 namespace=k8s.io Sep 13 00:54:10.804745 env[1303]: time="2025-09-13T00:54:10.804638725Z" level=info msg="cleaning up dead shim" Sep 13 00:54:10.810843 env[1303]: time="2025-09-13T00:54:10.810807017Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2811 runtime=io.containerd.runc.v2\n" Sep 13 00:54:11.039277 kubelet[2119]: E0913 00:54:11.039141 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzvl6" podUID="ad838603-c026-4e41-bf47-8168df866652" Sep 13 00:54:12.858416 env[1303]: time="2025-09-13T00:54:12.858363023Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:12.860468 env[1303]: time="2025-09-13T00:54:12.860430453Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:12.861923 env[1303]: time="2025-09-13T00:54:12.861884461Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:12.863407 env[1303]: time="2025-09-13T00:54:12.863378482Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:12.863926 env[1303]: time="2025-09-13T00:54:12.863895464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:54:12.864824 env[1303]: time="2025-09-13T00:54:12.864780120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:54:12.871121 env[1303]: time="2025-09-13T00:54:12.870767517Z" level=info msg="CreateContainer within sandbox \"5e8037bac738d5158bbe1cc637cc6f898fab95dadd1e97c3e653a7c969a465f7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:54:12.890899 env[1303]: time="2025-09-13T00:54:12.890840513Z" level=info msg="CreateContainer within sandbox \"5e8037bac738d5158bbe1cc637cc6f898fab95dadd1e97c3e653a7c969a465f7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"881cf25f7aed38dafcd91b5b64c643ceda2acbc9ba13a689e0a97ae74f1ec5a2\"" Sep 13 00:54:12.891413 env[1303]: time="2025-09-13T00:54:12.891382844Z" level=info msg="StartContainer for \"881cf25f7aed38dafcd91b5b64c643ceda2acbc9ba13a689e0a97ae74f1ec5a2\"" Sep 13 00:54:12.947769 env[1303]: time="2025-09-13T00:54:12.947711940Z" level=info msg="StartContainer for \"881cf25f7aed38dafcd91b5b64c643ceda2acbc9ba13a689e0a97ae74f1ec5a2\" returns successfully" Sep 13 00:54:13.038873 kubelet[2119]: E0913 00:54:13.038796 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzvl6" podUID="ad838603-c026-4e41-bf47-8168df866652" Sep 13 00:54:13.081974 kubelet[2119]: E0913 00:54:13.081931 2119 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:13.094986 kubelet[2119]: I0913 00:54:13.094692 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fddb4c47c-x6dnv" podStartSLOduration=2.609527053 podStartE2EDuration="5.094671258s" podCreationTimestamp="2025-09-13 00:54:08 +0000 UTC" firstStartedPulling="2025-09-13 00:54:10.379495992 +0000 UTC m=+18.426035632" lastFinishedPulling="2025-09-13 00:54:12.864640197 +0000 UTC m=+20.911179837" observedRunningTime="2025-09-13 00:54:13.094061751 +0000 UTC m=+21.140601391" watchObservedRunningTime="2025-09-13 00:54:13.094671258 +0000 UTC m=+21.141210898" Sep 13 00:54:14.083272 kubelet[2119]: I0913 00:54:14.083242 2119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:14.083678 kubelet[2119]: E0913 00:54:14.083619 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:15.039297 kubelet[2119]: E0913 00:54:15.039239 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzvl6" podUID="ad838603-c026-4e41-bf47-8168df866652" Sep 13 00:54:16.450280 env[1303]: time="2025-09-13T00:54:16.450219994Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:16.451911 env[1303]: time="2025-09-13T00:54:16.451863125Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:16.453665 env[1303]: time="2025-09-13T00:54:16.453628014Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:16.455311 env[1303]: time="2025-09-13T00:54:16.455265094Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:16.455821 env[1303]: time="2025-09-13T00:54:16.455791033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:54:16.457650 env[1303]: time="2025-09-13T00:54:16.457622899Z" level=info msg="CreateContainer within sandbox \"229b68a6b87fef40fda2e8f837035196ebc79c696128ac71122410471a5a7c00\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:54:16.473349 env[1303]: time="2025-09-13T00:54:16.473309172Z" level=info msg="CreateContainer within sandbox \"229b68a6b87fef40fda2e8f837035196ebc79c696128ac71122410471a5a7c00\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"954b766f2df77fb755563fc7eb4386934491e73e2b4312e56432dbc61106628c\"" Sep 13 00:54:16.473745 env[1303]: time="2025-09-13T00:54:16.473725014Z" level=info msg="StartContainer for \"954b766f2df77fb755563fc7eb4386934491e73e2b4312e56432dbc61106628c\"" Sep 13 00:54:17.075174 kubelet[2119]: E0913 00:54:17.075114 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-wzvl6" podUID="ad838603-c026-4e41-bf47-8168df866652" Sep 13 00:54:17.100508 env[1303]: time="2025-09-13T00:54:17.100439140Z" level=info msg="StartContainer for \"954b766f2df77fb755563fc7eb4386934491e73e2b4312e56432dbc61106628c\" returns successfully" Sep 13 00:54:18.285368 env[1303]: time="2025-09-13T00:54:18.285300966Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:54:18.306107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-954b766f2df77fb755563fc7eb4386934491e73e2b4312e56432dbc61106628c-rootfs.mount: Deactivated successfully. Sep 13 00:54:18.308855 env[1303]: time="2025-09-13T00:54:18.308809451Z" level=info msg="shim disconnected" id=954b766f2df77fb755563fc7eb4386934491e73e2b4312e56432dbc61106628c Sep 13 00:54:18.309040 env[1303]: time="2025-09-13T00:54:18.309008295Z" level=warning msg="cleaning up after shim disconnected" id=954b766f2df77fb755563fc7eb4386934491e73e2b4312e56432dbc61106628c namespace=k8s.io Sep 13 00:54:18.309040 env[1303]: time="2025-09-13T00:54:18.309024927Z" level=info msg="cleaning up dead shim" Sep 13 00:54:18.316696 env[1303]: time="2025-09-13T00:54:18.316647751Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2924 runtime=io.containerd.runc.v2\n" Sep 13 00:54:18.324768 kubelet[2119]: I0913 00:54:18.324728 2119 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:54:18.512345 kubelet[2119]: I0913 00:54:18.512289 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58mvm\" (UniqueName: \"kubernetes.io/projected/20e33070-d374-477c-b056-d9ebed8bda5f-kube-api-access-58mvm\") pod 
\"calico-apiserver-966dc6bcb-r4qg4\" (UID: \"20e33070-d374-477c-b056-d9ebed8bda5f\") " pod="calico-apiserver/calico-apiserver-966dc6bcb-r4qg4" Sep 13 00:54:18.512579 kubelet[2119]: I0913 00:54:18.512409 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlwbr\" (UniqueName: \"kubernetes.io/projected/29726fd2-0f28-42d0-a860-baf11550e993-kube-api-access-rlwbr\") pod \"coredns-7c65d6cfc9-xqk2c\" (UID: \"29726fd2-0f28-42d0-a860-baf11550e993\") " pod="kube-system/coredns-7c65d6cfc9-xqk2c" Sep 13 00:54:18.512579 kubelet[2119]: I0913 00:54:18.512481 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5d081d4-7d87-4234-8171-fc6646bb9f9b-config-volume\") pod \"coredns-7c65d6cfc9-dqtkm\" (UID: \"b5d081d4-7d87-4234-8171-fc6646bb9f9b\") " pod="kube-system/coredns-7c65d6cfc9-dqtkm" Sep 13 00:54:18.512579 kubelet[2119]: I0913 00:54:18.512516 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2542adf-ca6b-4757-9a4c-0ba349d6ae47-goldmane-ca-bundle\") pod \"goldmane-7988f88666-whpk5\" (UID: \"d2542adf-ca6b-4757-9a4c-0ba349d6ae47\") " pod="calico-system/goldmane-7988f88666-whpk5" Sep 13 00:54:18.512665 kubelet[2119]: I0913 00:54:18.512582 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xntsp\" (UniqueName: \"kubernetes.io/projected/e8351b28-7c27-4f21-ad76-83e2e206ba63-kube-api-access-xntsp\") pod \"calico-kube-controllers-798cbcbbb6-8gg8t\" (UID: \"e8351b28-7c27-4f21-ad76-83e2e206ba63\") " pod="calico-system/calico-kube-controllers-798cbcbbb6-8gg8t" Sep 13 00:54:18.512665 kubelet[2119]: I0913 00:54:18.512616 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-whisker-ca-bundle\") pod \"whisker-55cf57d69d-jj7b6\" (UID: \"70415598-0c6b-4edf-8bd8-a17e4dfe2ca9\") " pod="calico-system/whisker-55cf57d69d-jj7b6" Sep 13 00:54:18.512665 kubelet[2119]: I0913 00:54:18.512636 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d2542adf-ca6b-4757-9a4c-0ba349d6ae47-goldmane-key-pair\") pod \"goldmane-7988f88666-whpk5\" (UID: \"d2542adf-ca6b-4757-9a4c-0ba349d6ae47\") " pod="calico-system/goldmane-7988f88666-whpk5" Sep 13 00:54:18.512750 kubelet[2119]: I0913 00:54:18.512664 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20e33070-d374-477c-b056-d9ebed8bda5f-calico-apiserver-certs\") pod \"calico-apiserver-966dc6bcb-r4qg4\" (UID: \"20e33070-d374-477c-b056-d9ebed8bda5f\") " pod="calico-apiserver/calico-apiserver-966dc6bcb-r4qg4" Sep 13 00:54:18.512750 kubelet[2119]: I0913 00:54:18.512712 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tzpx\" (UniqueName: \"kubernetes.io/projected/b5d081d4-7d87-4234-8171-fc6646bb9f9b-kube-api-access-5tzpx\") pod \"coredns-7c65d6cfc9-dqtkm\" (UID: \"b5d081d4-7d87-4234-8171-fc6646bb9f9b\") " pod="kube-system/coredns-7c65d6cfc9-dqtkm" Sep 13 00:54:18.512801 kubelet[2119]: I0913 00:54:18.512770 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrplm\" (UniqueName: \"kubernetes.io/projected/4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c-kube-api-access-mrplm\") pod \"calico-apiserver-966dc6bcb-g8gcj\" (UID: \"4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c\") " pod="calico-apiserver/calico-apiserver-966dc6bcb-g8gcj" Sep 13 00:54:18.512801 kubelet[2119]: I0913 00:54:18.512791 2119 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8351b28-7c27-4f21-ad76-83e2e206ba63-tigera-ca-bundle\") pod \"calico-kube-controllers-798cbcbbb6-8gg8t\" (UID: \"e8351b28-7c27-4f21-ad76-83e2e206ba63\") " pod="calico-system/calico-kube-controllers-798cbcbbb6-8gg8t" Sep 13 00:54:18.512863 kubelet[2119]: I0913 00:54:18.512811 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2542adf-ca6b-4757-9a4c-0ba349d6ae47-config\") pod \"goldmane-7988f88666-whpk5\" (UID: \"d2542adf-ca6b-4757-9a4c-0ba349d6ae47\") " pod="calico-system/goldmane-7988f88666-whpk5" Sep 13 00:54:18.512863 kubelet[2119]: I0913 00:54:18.512848 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c-calico-apiserver-certs\") pod \"calico-apiserver-966dc6bcb-g8gcj\" (UID: \"4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c\") " pod="calico-apiserver/calico-apiserver-966dc6bcb-g8gcj" Sep 13 00:54:18.512931 kubelet[2119]: I0913 00:54:18.512866 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584pk\" (UniqueName: \"kubernetes.io/projected/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-kube-api-access-584pk\") pod \"whisker-55cf57d69d-jj7b6\" (UID: \"70415598-0c6b-4edf-8bd8-a17e4dfe2ca9\") " pod="calico-system/whisker-55cf57d69d-jj7b6" Sep 13 00:54:18.512931 kubelet[2119]: I0913 00:54:18.512904 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29726fd2-0f28-42d0-a860-baf11550e993-config-volume\") pod \"coredns-7c65d6cfc9-xqk2c\" (UID: \"29726fd2-0f28-42d0-a860-baf11550e993\") " 
pod="kube-system/coredns-7c65d6cfc9-xqk2c" Sep 13 00:54:18.512983 kubelet[2119]: I0913 00:54:18.512945 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-whisker-backend-key-pair\") pod \"whisker-55cf57d69d-jj7b6\" (UID: \"70415598-0c6b-4edf-8bd8-a17e4dfe2ca9\") " pod="calico-system/whisker-55cf57d69d-jj7b6" Sep 13 00:54:18.513031 kubelet[2119]: I0913 00:54:18.512962 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxsxz\" (UniqueName: \"kubernetes.io/projected/d2542adf-ca6b-4757-9a4c-0ba349d6ae47-kube-api-access-sxsxz\") pod \"goldmane-7988f88666-whpk5\" (UID: \"d2542adf-ca6b-4757-9a4c-0ba349d6ae47\") " pod="calico-system/goldmane-7988f88666-whpk5" Sep 13 00:54:18.653541 kubelet[2119]: E0913 00:54:18.653445 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:18.654174 env[1303]: time="2025-09-13T00:54:18.654128947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xqk2c,Uid:29726fd2-0f28-42d0-a860-baf11550e993,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:18.667325 kubelet[2119]: E0913 00:54:18.667297 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:18.667819 env[1303]: time="2025-09-13T00:54:18.667764688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dqtkm,Uid:b5d081d4-7d87-4234-8171-fc6646bb9f9b,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:18.669226 env[1303]: time="2025-09-13T00:54:18.669203153Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-798cbcbbb6-8gg8t,Uid:e8351b28-7c27-4f21-ad76-83e2e206ba63,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:18.674369 env[1303]: time="2025-09-13T00:54:18.673847486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966dc6bcb-g8gcj,Uid:4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:54:18.676046 env[1303]: time="2025-09-13T00:54:18.676003079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55cf57d69d-jj7b6,Uid:70415598-0c6b-4edf-8bd8-a17e4dfe2ca9,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:18.676551 env[1303]: time="2025-09-13T00:54:18.676519390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-whpk5,Uid:d2542adf-ca6b-4757-9a4c-0ba349d6ae47,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:18.678604 env[1303]: time="2025-09-13T00:54:18.678543886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966dc6bcb-r4qg4,Uid:20e33070-d374-477c-b056-d9ebed8bda5f,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:54:18.733683 env[1303]: time="2025-09-13T00:54:18.733618249Z" level=error msg="Failed to destroy network for sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.734835 env[1303]: time="2025-09-13T00:54:18.734794241Z" level=error msg="encountered an error cleaning up failed sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.734900 env[1303]: time="2025-09-13T00:54:18.734841650Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xqk2c,Uid:29726fd2-0f28-42d0-a860-baf11550e993,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.735223 kubelet[2119]: E0913 00:54:18.735076 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.735223 kubelet[2119]: E0913 00:54:18.735153 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xqk2c" Sep 13 00:54:18.735223 kubelet[2119]: E0913 00:54:18.735177 2119 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xqk2c" Sep 13 00:54:18.735865 kubelet[2119]: E0913 00:54:18.735813 2119 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-xqk2c_kube-system(29726fd2-0f28-42d0-a860-baf11550e993)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-xqk2c_kube-system(29726fd2-0f28-42d0-a860-baf11550e993)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xqk2c" podUID="29726fd2-0f28-42d0-a860-baf11550e993" Sep 13 00:54:18.825617 env[1303]: time="2025-09-13T00:54:18.825531485Z" level=error msg="Failed to destroy network for sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.825925 env[1303]: time="2025-09-13T00:54:18.825896531Z" level=error msg="encountered an error cleaning up failed sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.825978 env[1303]: time="2025-09-13T00:54:18.825957256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798cbcbbb6-8gg8t,Uid:e8351b28-7c27-4f21-ad76-83e2e206ba63,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.826199 kubelet[2119]: E0913 00:54:18.826158 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.826279 kubelet[2119]: E0913 00:54:18.826222 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-798cbcbbb6-8gg8t" Sep 13 00:54:18.826279 kubelet[2119]: E0913 00:54:18.826241 2119 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-798cbcbbb6-8gg8t" Sep 13 00:54:18.826342 kubelet[2119]: E0913 00:54:18.826277 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-798cbcbbb6-8gg8t_calico-system(e8351b28-7c27-4f21-ad76-83e2e206ba63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-798cbcbbb6-8gg8t_calico-system(e8351b28-7c27-4f21-ad76-83e2e206ba63)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-798cbcbbb6-8gg8t" podUID="e8351b28-7c27-4f21-ad76-83e2e206ba63" Sep 13 00:54:18.827654 env[1303]: time="2025-09-13T00:54:18.827584926Z" level=error msg="Failed to destroy network for sandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.827974 env[1303]: time="2025-09-13T00:54:18.827934143Z" level=error msg="encountered an error cleaning up failed sandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.828033 env[1303]: time="2025-09-13T00:54:18.827992323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55cf57d69d-jj7b6,Uid:70415598-0c6b-4edf-8bd8-a17e4dfe2ca9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.828477 kubelet[2119]: E0913 00:54:18.828427 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.828556 kubelet[2119]: E0913 00:54:18.828511 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55cf57d69d-jj7b6" Sep 13 00:54:18.828556 kubelet[2119]: E0913 00:54:18.828534 2119 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55cf57d69d-jj7b6" Sep 13 00:54:18.828628 kubelet[2119]: E0913 00:54:18.828588 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55cf57d69d-jj7b6_calico-system(70415598-0c6b-4edf-8bd8-a17e4dfe2ca9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55cf57d69d-jj7b6_calico-system(70415598-0c6b-4edf-8bd8-a17e4dfe2ca9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55cf57d69d-jj7b6" 
podUID="70415598-0c6b-4edf-8bd8-a17e4dfe2ca9" Sep 13 00:54:18.829482 env[1303]: time="2025-09-13T00:54:18.829433592Z" level=error msg="Failed to destroy network for sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.829772 env[1303]: time="2025-09-13T00:54:18.829744908Z" level=error msg="encountered an error cleaning up failed sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.829841 env[1303]: time="2025-09-13T00:54:18.829784191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dqtkm,Uid:b5d081d4-7d87-4234-8171-fc6646bb9f9b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.830010 kubelet[2119]: E0913 00:54:18.829982 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.830081 kubelet[2119]: E0913 00:54:18.830012 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dqtkm" Sep 13 00:54:18.830081 kubelet[2119]: E0913 00:54:18.830028 2119 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dqtkm" Sep 13 00:54:18.830081 kubelet[2119]: E0913 00:54:18.830051 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dqtkm_kube-system(b5d081d4-7d87-4234-8171-fc6646bb9f9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dqtkm_kube-system(b5d081d4-7d87-4234-8171-fc6646bb9f9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dqtkm" podUID="b5d081d4-7d87-4234-8171-fc6646bb9f9b" Sep 13 00:54:18.838779 env[1303]: time="2025-09-13T00:54:18.838719424Z" level=error msg="Failed to destroy network for sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 13 00:54:18.839145 env[1303]: time="2025-09-13T00:54:18.839098767Z" level=error msg="encountered an error cleaning up failed sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.839186 env[1303]: time="2025-09-13T00:54:18.839154782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966dc6bcb-r4qg4,Uid:20e33070-d374-477c-b056-d9ebed8bda5f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.839417 kubelet[2119]: E0913 00:54:18.839370 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.839478 kubelet[2119]: E0913 00:54:18.839432 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-966dc6bcb-r4qg4" Sep 13 00:54:18.839478 kubelet[2119]: E0913 
00:54:18.839451 2119 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-966dc6bcb-r4qg4" Sep 13 00:54:18.839549 kubelet[2119]: E0913 00:54:18.839492 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-966dc6bcb-r4qg4_calico-apiserver(20e33070-d374-477c-b056-d9ebed8bda5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-966dc6bcb-r4qg4_calico-apiserver(20e33070-d374-477c-b056-d9ebed8bda5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-966dc6bcb-r4qg4" podUID="20e33070-d374-477c-b056-d9ebed8bda5f" Sep 13 00:54:18.850887 env[1303]: time="2025-09-13T00:54:18.850839275Z" level=error msg="Failed to destroy network for sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.851440 env[1303]: time="2025-09-13T00:54:18.851380923Z" level=error msg="Failed to destroy network for sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.851580 env[1303]: time="2025-09-13T00:54:18.851528620Z" level=error msg="encountered an error cleaning up failed sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.851732 env[1303]: time="2025-09-13T00:54:18.851673332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-whpk5,Uid:d2542adf-ca6b-4757-9a4c-0ba349d6ae47,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.851936 env[1303]: time="2025-09-13T00:54:18.851751360Z" level=error msg="encountered an error cleaning up failed sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.851936 env[1303]: time="2025-09-13T00:54:18.851802706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966dc6bcb-g8gcj,Uid:4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 13 00:54:18.852014 kubelet[2119]: E0913 00:54:18.851951 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.852062 kubelet[2119]: E0913 00:54:18.852018 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-966dc6bcb-g8gcj" Sep 13 00:54:18.852062 kubelet[2119]: E0913 00:54:18.852045 2119 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-966dc6bcb-g8gcj" Sep 13 00:54:18.852062 kubelet[2119]: E0913 00:54:18.851957 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.852138 kubelet[2119]: E0913 00:54:18.852083 2119 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-966dc6bcb-g8gcj_calico-apiserver(4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-966dc6bcb-g8gcj_calico-apiserver(4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-966dc6bcb-g8gcj" podUID="4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c" Sep 13 00:54:18.852138 kubelet[2119]: E0913 00:54:18.852097 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-whpk5" Sep 13 00:54:18.852138 kubelet[2119]: E0913 00:54:18.852124 2119 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-whpk5" Sep 13 00:54:18.852240 kubelet[2119]: E0913 00:54:18.852162 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-whpk5_calico-system(d2542adf-ca6b-4757-9a4c-0ba349d6ae47)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-whpk5_calico-system(d2542adf-ca6b-4757-9a4c-0ba349d6ae47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-whpk5" podUID="d2542adf-ca6b-4757-9a4c-0ba349d6ae47" Sep 13 00:54:19.041041 env[1303]: time="2025-09-13T00:54:19.040904700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzvl6,Uid:ad838603-c026-4e41-bf47-8168df866652,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:19.088263 env[1303]: time="2025-09-13T00:54:19.088197021Z" level=error msg="Failed to destroy network for sandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:19.088616 env[1303]: time="2025-09-13T00:54:19.088584279Z" level=error msg="encountered an error cleaning up failed sandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:19.088659 env[1303]: time="2025-09-13T00:54:19.088632059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzvl6,Uid:ad838603-c026-4e41-bf47-8168df866652,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:19.088919 kubelet[2119]: E0913 00:54:19.088872 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:19.088919 kubelet[2119]: E0913 00:54:19.088932 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzvl6" Sep 13 00:54:19.089108 kubelet[2119]: E0913 00:54:19.088950 2119 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzvl6" Sep 13 00:54:19.089108 kubelet[2119]: E0913 00:54:19.088992 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wzvl6_calico-system(ad838603-c026-4e41-bf47-8168df866652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wzvl6_calico-system(ad838603-c026-4e41-bf47-8168df866652)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzvl6" podUID="ad838603-c026-4e41-bf47-8168df866652" Sep 13 00:54:19.112147 kubelet[2119]: I0913 00:54:19.112102 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:19.113384 env[1303]: time="2025-09-13T00:54:19.112787345Z" level=info msg="StopPodSandbox for \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\"" Sep 13 00:54:19.114175 kubelet[2119]: I0913 00:54:19.113120 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:19.114175 kubelet[2119]: I0913 00:54:19.114019 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:19.115130 env[1303]: time="2025-09-13T00:54:19.115088141Z" level=info msg="StopPodSandbox for \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\"" Sep 13 00:54:19.115557 env[1303]: time="2025-09-13T00:54:19.115481551Z" level=info msg="StopPodSandbox for \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\"" Sep 13 00:54:19.120326 env[1303]: time="2025-09-13T00:54:19.120289471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:54:19.121895 kubelet[2119]: I0913 00:54:19.121864 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:19.122669 env[1303]: time="2025-09-13T00:54:19.122633668Z" 
level=info msg="StopPodSandbox for \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\"" Sep 13 00:54:19.123606 kubelet[2119]: I0913 00:54:19.123588 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:19.123941 env[1303]: time="2025-09-13T00:54:19.123907063Z" level=info msg="StopPodSandbox for \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\"" Sep 13 00:54:19.124826 kubelet[2119]: I0913 00:54:19.124805 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:19.125180 env[1303]: time="2025-09-13T00:54:19.125149640Z" level=info msg="StopPodSandbox for \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\"" Sep 13 00:54:19.126272 kubelet[2119]: I0913 00:54:19.126240 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:19.126837 env[1303]: time="2025-09-13T00:54:19.126804140Z" level=info msg="StopPodSandbox for \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\"" Sep 13 00:54:19.128129 kubelet[2119]: I0913 00:54:19.128108 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:19.128708 env[1303]: time="2025-09-13T00:54:19.128687181Z" level=info msg="StopPodSandbox for \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\"" Sep 13 00:54:19.155798 env[1303]: time="2025-09-13T00:54:19.155721822Z" level=error msg="StopPodSandbox for \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\" failed" error="failed to destroy network for sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:19.156328 kubelet[2119]: E0913 00:54:19.156275 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:19.156399 kubelet[2119]: E0913 00:54:19.156336 2119 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35"} Sep 13 00:54:19.156399 kubelet[2119]: E0913 00:54:19.156393 2119 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8351b28-7c27-4f21-ad76-83e2e206ba63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:19.156505 kubelet[2119]: E0913 00:54:19.156417 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8351b28-7c27-4f21-ad76-83e2e206ba63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-798cbcbbb6-8gg8t" podUID="e8351b28-7c27-4f21-ad76-83e2e206ba63" Sep 13 00:54:19.179323 env[1303]: time="2025-09-13T00:54:19.179260380Z" level=error msg="StopPodSandbox for \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\" failed" error="failed to destroy network for sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:19.179940 kubelet[2119]: E0913 00:54:19.179795 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:19.179940 kubelet[2119]: E0913 00:54:19.179840 2119 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc"} Sep 13 00:54:19.179940 kubelet[2119]: E0913 00:54:19.179883 2119 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d2542adf-ca6b-4757-9a4c-0ba349d6ae47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:19.179940 kubelet[2119]: E0913 00:54:19.179905 2119 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d2542adf-ca6b-4757-9a4c-0ba349d6ae47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-whpk5" podUID="d2542adf-ca6b-4757-9a4c-0ba349d6ae47" Sep 13 00:54:19.186072 env[1303]: time="2025-09-13T00:54:19.186011543Z" level=error msg="StopPodSandbox for \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\" failed" error="failed to destroy network for sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:19.186464 kubelet[2119]: E0913 00:54:19.186430 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:19.186464 kubelet[2119]: E0913 00:54:19.186463 2119 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0"} Sep 13 00:54:19.186617 kubelet[2119]: E0913 00:54:19.186484 2119 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"29726fd2-0f28-42d0-a860-baf11550e993\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:19.186617 kubelet[2119]: E0913 00:54:19.186509 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29726fd2-0f28-42d0-a860-baf11550e993\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xqk2c" podUID="29726fd2-0f28-42d0-a860-baf11550e993" Sep 13 00:54:19.200862 env[1303]: time="2025-09-13T00:54:19.200803264Z" level=error msg="StopPodSandbox for \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\" failed" error="failed to destroy network for sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:19.201085 kubelet[2119]: E0913 00:54:19.201029 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:19.201085 kubelet[2119]: E0913 00:54:19.201072 2119 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a"} Sep 13 00:54:19.201177 kubelet[2119]: E0913 00:54:19.201097 2119 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5d081d4-7d87-4234-8171-fc6646bb9f9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:19.201177 kubelet[2119]: E0913 00:54:19.201114 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5d081d4-7d87-4234-8171-fc6646bb9f9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dqtkm" podUID="b5d081d4-7d87-4234-8171-fc6646bb9f9b" Sep 13 00:54:19.209480 env[1303]: time="2025-09-13T00:54:19.209411570Z" level=error msg="StopPodSandbox for \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\" failed" error="failed to destroy network for sandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 
00:54:19.209783 env[1303]: time="2025-09-13T00:54:19.209426138Z" level=error msg="StopPodSandbox for \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\" failed" error="failed to destroy network for sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:19.210159 kubelet[2119]: E0913 00:54:19.209936 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:19.210159 kubelet[2119]: E0913 00:54:19.209993 2119 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6"} Sep 13 00:54:19.210159 kubelet[2119]: E0913 00:54:19.210028 2119 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20e33070-d374-477c-b056-d9ebed8bda5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:19.210159 kubelet[2119]: E0913 00:54:19.210033 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:19.210363 kubelet[2119]: E0913 00:54:19.210051 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20e33070-d374-477c-b056-d9ebed8bda5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-966dc6bcb-r4qg4" podUID="20e33070-d374-477c-b056-d9ebed8bda5f" Sep 13 00:54:19.210363 kubelet[2119]: E0913 00:54:19.210071 2119 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb"} Sep 13 00:54:19.210363 kubelet[2119]: E0913 00:54:19.210112 2119 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad838603-c026-4e41-bf47-8168df866652\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:19.210363 kubelet[2119]: E0913 00:54:19.210130 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad838603-c026-4e41-bf47-8168df866652\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzvl6" podUID="ad838603-c026-4e41-bf47-8168df866652" Sep 13 00:54:19.212900 env[1303]: time="2025-09-13T00:54:19.212836750Z" level=error msg="StopPodSandbox for \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\" failed" error="failed to destroy network for sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:19.213208 kubelet[2119]: E0913 00:54:19.213099 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:19.213208 kubelet[2119]: E0913 00:54:19.213146 2119 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1"} Sep 13 00:54:19.213208 kubelet[2119]: E0913 00:54:19.213164 2119 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:19.213208 kubelet[2119]: E0913 00:54:19.213179 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-966dc6bcb-g8gcj" podUID="4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c" Sep 13 00:54:19.214643 env[1303]: time="2025-09-13T00:54:19.214607210Z" level=error msg="StopPodSandbox for \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\" failed" error="failed to destroy network for sandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:19.214791 kubelet[2119]: E0913 00:54:19.214759 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:19.214841 kubelet[2119]: E0913 00:54:19.214792 2119 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5"} Sep 13 00:54:19.214841 kubelet[2119]: E0913 00:54:19.214812 2119 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"70415598-0c6b-4edf-8bd8-a17e4dfe2ca9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:19.214918 kubelet[2119]: E0913 00:54:19.214866 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"70415598-0c6b-4edf-8bd8-a17e4dfe2ca9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55cf57d69d-jj7b6" podUID="70415598-0c6b-4edf-8bd8-a17e4dfe2ca9" Sep 13 00:54:19.281804 kubelet[2119]: I0913 00:54:19.281749 2119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:19.282203 kubelet[2119]: E0913 00:54:19.282181 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:19.309474 kernel: kauditd_printk_skb: 19 callbacks suppressed Sep 13 00:54:19.309638 kernel: audit: type=1325 audit(1757724859.304:291): table=filter:97 family=2 entries=21 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Sep 13 00:54:19.304000 audit[3369]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:19.304000 audit[3369]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd06a2f850 a2=0 a3=7ffd06a2f83c items=0 ppid=2266 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:19.304000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:19.317524 kernel: audit: type=1300 audit(1757724859.304:291): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd06a2f850 a2=0 a3=7ffd06a2f83c items=0 ppid=2266 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:19.317727 kernel: audit: type=1327 audit(1757724859.304:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:19.316000 audit[3369]: NETFILTER_CFG table=nat:98 family=2 entries=19 op=nft_register_chain pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:19.320780 kernel: audit: type=1325 audit(1757724859.316:292): table=nat:98 family=2 entries=19 op=nft_register_chain pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:19.320837 kernel: audit: type=1300 audit(1757724859.316:292): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd06a2f850 a2=0 a3=7ffd06a2f83c items=0 ppid=2266 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:19.316000 audit[3369]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd06a2f850 a2=0 a3=7ffd06a2f83c items=0 ppid=2266 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:19.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:19.327482 kernel: audit: type=1327 audit(1757724859.316:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:20.130169 kubelet[2119]: E0913 00:54:20.130137 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:25.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.131:22-10.0.0.1:35088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:25.067643 systemd[1]: Started sshd@7-10.0.0.131:22-10.0.0.1:35088.service. Sep 13 00:54:25.071596 kernel: audit: type=1130 audit(1757724865.066:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.131:22-10.0.0.1:35088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:25.116000 audit[3371]: USER_ACCT pid=3371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.118209 sshd[3371]: Accepted publickey for core from 10.0.0.1 port 35088 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:25.120000 audit[3371]: CRED_ACQ pid=3371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.122233 sshd[3371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:25.125407 kernel: audit: type=1101 audit(1757724865.116:294): pid=3371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.125510 kernel: audit: type=1103 audit(1757724865.120:295): pid=3371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.125533 kernel: audit: type=1006 audit(1757724865.120:296): pid=3371 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Sep 13 00:54:25.127250 systemd[1]: Started session-8.scope. 
Sep 13 00:54:25.131400 kernel: audit: type=1300 audit(1757724865.120:296): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc70ae0640 a2=3 a3=0 items=0 ppid=1 pid=3371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:25.120000 audit[3371]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc70ae0640 a2=3 a3=0 items=0 ppid=1 pid=3371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:25.127590 systemd-logind[1289]: New session 8 of user core. Sep 13 00:54:25.120000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:25.131000 audit[3371]: USER_START pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.137777 kernel: audit: type=1327 audit(1757724865.120:296): proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:25.137844 kernel: audit: type=1105 audit(1757724865.131:297): pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.137866 kernel: audit: type=1103 audit(1757724865.132:298): pid=3374 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.132000 audit[3374]: CRED_ACQ pid=3374 uid=0 auid=500 ses=8 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.258823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382606658.mount: Deactivated successfully. Sep 13 00:54:25.261194 sshd[3371]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:25.270093 kernel: audit: type=1106 audit(1757724865.260:299): pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.270150 kernel: audit: type=1104 audit(1757724865.260:300): pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.260000 audit[3371]: USER_END pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.260000 audit[3371]: CRED_DISP pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:25.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.131:22-10.0.0.1:35088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:25.263753 systemd[1]: sshd@7-10.0.0.131:22-10.0.0.1:35088.service: Deactivated successfully. Sep 13 00:54:25.264346 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:54:25.272419 systemd-logind[1289]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:54:25.274436 systemd-logind[1289]: Removed session 8. Sep 13 00:54:26.785204 env[1303]: time="2025-09-13T00:54:26.785134506Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:26.867243 env[1303]: time="2025-09-13T00:54:26.867180502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:26.972992 env[1303]: time="2025-09-13T00:54:26.972932165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:26.977213 env[1303]: time="2025-09-13T00:54:26.977142887Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:26.977614 env[1303]: time="2025-09-13T00:54:26.977584487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:54:26.986454 env[1303]: time="2025-09-13T00:54:26.986407017Z" level=info msg="CreateContainer within sandbox \"229b68a6b87fef40fda2e8f837035196ebc79c696128ac71122410471a5a7c00\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:54:27.027737 env[1303]: time="2025-09-13T00:54:27.027684480Z" 
level=info msg="CreateContainer within sandbox \"229b68a6b87fef40fda2e8f837035196ebc79c696128ac71122410471a5a7c00\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"da2128f64d786db12eb0b474b202971ce4ad11f6ec469ed4cfcd360a56affb4b\"" Sep 13 00:54:27.029288 env[1303]: time="2025-09-13T00:54:27.028115890Z" level=info msg="StartContainer for \"da2128f64d786db12eb0b474b202971ce4ad11f6ec469ed4cfcd360a56affb4b\"" Sep 13 00:54:27.079130 env[1303]: time="2025-09-13T00:54:27.079003319Z" level=info msg="StartContainer for \"da2128f64d786db12eb0b474b202971ce4ad11f6ec469ed4cfcd360a56affb4b\" returns successfully" Sep 13 00:54:27.162506 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:54:27.162752 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 13 00:54:27.162788 kubelet[2119]: I0913 00:54:27.162136 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zn578" podStartSLOduration=1.436537721 podStartE2EDuration="19.162116801s" podCreationTimestamp="2025-09-13 00:54:08 +0000 UTC" firstStartedPulling="2025-09-13 00:54:09.253124781 +0000 UTC m=+17.299664421" lastFinishedPulling="2025-09-13 00:54:26.978703861 +0000 UTC m=+35.025243501" observedRunningTime="2025-09-13 00:54:27.161406777 +0000 UTC m=+35.207946437" watchObservedRunningTime="2025-09-13 00:54:27.162116801 +0000 UTC m=+35.208656441" Sep 13 00:54:27.989077 systemd[1]: run-containerd-runc-k8s.io-da2128f64d786db12eb0b474b202971ce4ad11f6ec469ed4cfcd360a56affb4b-runc.Pg6tzl.mount: Deactivated successfully. 
Sep 13 00:54:28.148551 kubelet[2119]: I0913 00:54:28.148508 2119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:28.446391 env[1303]: time="2025-09-13T00:54:28.446321154Z" level=info msg="StopPodSandbox for \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\"" Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.510 [INFO][3447] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.510 [INFO][3447] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" iface="eth0" netns="/var/run/netns/cni-9751b94f-67fa-788e-e7a5-a6c7081e603e" Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.511 [INFO][3447] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" iface="eth0" netns="/var/run/netns/cni-9751b94f-67fa-788e-e7a5-a6c7081e603e" Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.511 [INFO][3447] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" iface="eth0" netns="/var/run/netns/cni-9751b94f-67fa-788e-e7a5-a6c7081e603e" Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.511 [INFO][3447] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.511 [INFO][3447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.572 [INFO][3462] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" HandleID="k8s-pod-network.4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Workload="localhost-k8s-whisker--55cf57d69d--jj7b6-eth0" Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.573 [INFO][3462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.573 [INFO][3462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.578 [WARNING][3462] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" HandleID="k8s-pod-network.4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Workload="localhost-k8s-whisker--55cf57d69d--jj7b6-eth0" Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.578 [INFO][3462] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" HandleID="k8s-pod-network.4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Workload="localhost-k8s-whisker--55cf57d69d--jj7b6-eth0" Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.579 [INFO][3462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:28.582663 env[1303]: 2025-09-13 00:54:28.581 [INFO][3447] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:28.584068 env[1303]: time="2025-09-13T00:54:28.582823049Z" level=info msg="TearDown network for sandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\" successfully" Sep 13 00:54:28.584068 env[1303]: time="2025-09-13T00:54:28.582855581Z" level=info msg="StopPodSandbox for \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\" returns successfully" Sep 13 00:54:28.585249 systemd[1]: run-netns-cni\x2d9751b94f\x2d67fa\x2d788e\x2de7a5\x2da6c7081e603e.mount: Deactivated successfully. 
Sep 13 00:54:28.772014 kubelet[2119]: I0913 00:54:28.771854 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-whisker-ca-bundle\") pod \"70415598-0c6b-4edf-8bd8-a17e4dfe2ca9\" (UID: \"70415598-0c6b-4edf-8bd8-a17e4dfe2ca9\") " Sep 13 00:54:28.772014 kubelet[2119]: I0913 00:54:28.771904 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-584pk\" (UniqueName: \"kubernetes.io/projected/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-kube-api-access-584pk\") pod \"70415598-0c6b-4edf-8bd8-a17e4dfe2ca9\" (UID: \"70415598-0c6b-4edf-8bd8-a17e4dfe2ca9\") " Sep 13 00:54:28.772014 kubelet[2119]: I0913 00:54:28.771927 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-whisker-backend-key-pair\") pod \"70415598-0c6b-4edf-8bd8-a17e4dfe2ca9\" (UID: \"70415598-0c6b-4edf-8bd8-a17e4dfe2ca9\") " Sep 13 00:54:28.772502 kubelet[2119]: I0913 00:54:28.772401 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "70415598-0c6b-4edf-8bd8-a17e4dfe2ca9" (UID: "70415598-0c6b-4edf-8bd8-a17e4dfe2ca9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:54:28.775059 kubelet[2119]: I0913 00:54:28.774995 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "70415598-0c6b-4edf-8bd8-a17e4dfe2ca9" (UID: "70415598-0c6b-4edf-8bd8-a17e4dfe2ca9"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:54:28.775219 kubelet[2119]: I0913 00:54:28.775137 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-kube-api-access-584pk" (OuterVolumeSpecName: "kube-api-access-584pk") pod "70415598-0c6b-4edf-8bd8-a17e4dfe2ca9" (UID: "70415598-0c6b-4edf-8bd8-a17e4dfe2ca9"). InnerVolumeSpecName "kube-api-access-584pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:54:28.777087 systemd[1]: var-lib-kubelet-pods-70415598\x2d0c6b\x2d4edf\x2d8bd8\x2da17e4dfe2ca9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d584pk.mount: Deactivated successfully. Sep 13 00:54:28.777278 systemd[1]: var-lib-kubelet-pods-70415598\x2d0c6b\x2d4edf\x2d8bd8\x2da17e4dfe2ca9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:54:28.872216 kubelet[2119]: I0913 00:54:28.872151 2119 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:54:28.872216 kubelet[2119]: I0913 00:54:28.872194 2119 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-584pk\" (UniqueName: \"kubernetes.io/projected/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-kube-api-access-584pk\") on node \"localhost\" DevicePath \"\"" Sep 13 00:54:28.872216 kubelet[2119]: I0913 00:54:28.872202 2119 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:54:29.375126 kubelet[2119]: I0913 00:54:29.375061 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f049d712-11f3-4746-ae35-07a865920bf1-whisker-ca-bundle\") pod \"whisker-874dd5657-lw978\" (UID: \"f049d712-11f3-4746-ae35-07a865920bf1\") " pod="calico-system/whisker-874dd5657-lw978" Sep 13 00:54:29.375126 kubelet[2119]: I0913 00:54:29.375108 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f049d712-11f3-4746-ae35-07a865920bf1-whisker-backend-key-pair\") pod \"whisker-874dd5657-lw978\" (UID: \"f049d712-11f3-4746-ae35-07a865920bf1\") " pod="calico-system/whisker-874dd5657-lw978" Sep 13 00:54:29.375126 kubelet[2119]: I0913 00:54:29.375126 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq5rx\" (UniqueName: \"kubernetes.io/projected/f049d712-11f3-4746-ae35-07a865920bf1-kube-api-access-jq5rx\") pod \"whisker-874dd5657-lw978\" (UID: \"f049d712-11f3-4746-ae35-07a865920bf1\") " pod="calico-system/whisker-874dd5657-lw978" Sep 13 00:54:29.498615 env[1303]: time="2025-09-13T00:54:29.498540725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-874dd5657-lw978,Uid:f049d712-11f3-4746-ae35-07a865920bf1,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:29.640747 systemd-networkd[1078]: califa3c7dc410c: Link UP Sep 13 00:54:29.643920 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:54:29.643990 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califa3c7dc410c: link becomes ready Sep 13 00:54:29.644140 systemd-networkd[1078]: califa3c7dc410c: Gained carrier Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.533 [INFO][3485] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.544 [INFO][3485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--874dd5657--lw978-eth0 whisker-874dd5657- calico-system 
f049d712-11f3-4746-ae35-07a865920bf1 964 0 2025-09-13 00:54:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:874dd5657 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-874dd5657-lw978 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califa3c7dc410c [] [] }} ContainerID="ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" Namespace="calico-system" Pod="whisker-874dd5657-lw978" WorkloadEndpoint="localhost-k8s-whisker--874dd5657--lw978-" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.544 [INFO][3485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" Namespace="calico-system" Pod="whisker-874dd5657-lw978" WorkloadEndpoint="localhost-k8s-whisker--874dd5657--lw978-eth0" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.565 [INFO][3499] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" HandleID="k8s-pod-network.ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" Workload="localhost-k8s-whisker--874dd5657--lw978-eth0" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.565 [INFO][3499] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" HandleID="k8s-pod-network.ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" Workload="localhost-k8s-whisker--874dd5657--lw978-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-874dd5657-lw978", "timestamp":"2025-09-13 00:54:29.565727716 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.565 [INFO][3499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.565 [INFO][3499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.566 [INFO][3499] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.574 [INFO][3499] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" host="localhost" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.579 [INFO][3499] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.583 [INFO][3499] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.584 [INFO][3499] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.586 [INFO][3499] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.586 [INFO][3499] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" host="localhost" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.587 [INFO][3499] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1 Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.622 [INFO][3499] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" host="localhost" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.630 [INFO][3499] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" host="localhost" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.630 [INFO][3499] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" host="localhost" Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.630 [INFO][3499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:29.658714 env[1303]: 2025-09-13 00:54:29.630 [INFO][3499] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" HandleID="k8s-pod-network.ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" Workload="localhost-k8s-whisker--874dd5657--lw978-eth0" Sep 13 00:54:29.659336 env[1303]: 2025-09-13 00:54:29.632 [INFO][3485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" Namespace="calico-system" Pod="whisker-874dd5657-lw978" WorkloadEndpoint="localhost-k8s-whisker--874dd5657--lw978-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--874dd5657--lw978-eth0", GenerateName:"whisker-874dd5657-", Namespace:"calico-system", SelfLink:"", UID:"f049d712-11f3-4746-ae35-07a865920bf1", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"874dd5657", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-874dd5657-lw978", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califa3c7dc410c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:29.659336 env[1303]: 2025-09-13 00:54:29.632 [INFO][3485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" Namespace="calico-system" Pod="whisker-874dd5657-lw978" WorkloadEndpoint="localhost-k8s-whisker--874dd5657--lw978-eth0" Sep 13 00:54:29.659336 env[1303]: 2025-09-13 00:54:29.632 [INFO][3485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa3c7dc410c ContainerID="ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" Namespace="calico-system" Pod="whisker-874dd5657-lw978" WorkloadEndpoint="localhost-k8s-whisker--874dd5657--lw978-eth0" Sep 13 00:54:29.659336 env[1303]: 2025-09-13 00:54:29.645 [INFO][3485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" Namespace="calico-system" Pod="whisker-874dd5657-lw978" WorkloadEndpoint="localhost-k8s-whisker--874dd5657--lw978-eth0" Sep 13 00:54:29.659336 env[1303]: 2025-09-13 00:54:29.646 [INFO][3485] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" Namespace="calico-system" Pod="whisker-874dd5657-lw978" WorkloadEndpoint="localhost-k8s-whisker--874dd5657--lw978-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--874dd5657--lw978-eth0", GenerateName:"whisker-874dd5657-", Namespace:"calico-system", SelfLink:"", UID:"f049d712-11f3-4746-ae35-07a865920bf1", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"874dd5657", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1", Pod:"whisker-874dd5657-lw978", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califa3c7dc410c", MAC:"fa:46:d0:e8:35:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:29.659336 env[1303]: 2025-09-13 00:54:29.656 [INFO][3485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1" Namespace="calico-system" 
Pod="whisker-874dd5657-lw978" WorkloadEndpoint="localhost-k8s-whisker--874dd5657--lw978-eth0" Sep 13 00:54:29.680000 audit[3589]: AVC avc: denied { write } for pid=3589 comm="tee" name="fd" dev="proc" ino=24403 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:29.680000 audit[3589]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe455167e7 a2=241 a3=1b6 items=1 ppid=3531 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.680000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 13 00:54:29.680000 audit: PATH item=0 name="/dev/fd/63" inode=22293 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:29.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:29.680000 audit[3596]: AVC avc: denied { write } for pid=3596 comm="tee" name="fd" dev="proc" ino=25643 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:29.680000 audit[3596]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd260077d6 a2=241 a3=1b6 items=1 ppid=3519 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.680000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 00:54:29.680000 audit: PATH item=0 name="/dev/fd/63" inode=24400 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Sep 13 00:54:29.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:29.683294 env[1303]: time="2025-09-13T00:54:29.683216526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:29.683417 env[1303]: time="2025-09-13T00:54:29.683392586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:29.683521 env[1303]: time="2025-09-13T00:54:29.683494318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:29.683795 env[1303]: time="2025-09-13T00:54:29.683765126Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1 pid=3588 runtime=io.containerd.runc.v2 Sep 13 00:54:29.689000 audit[3569]: AVC avc: denied { write } for pid=3569 comm="tee" name="fd" dev="proc" ino=25647 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:29.689000 audit[3569]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcf78997e5 a2=241 a3=1b6 items=1 ppid=3522 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.689000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 13 00:54:29.689000 audit: PATH item=0 name="/dev/fd/63" inode=22290 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:29.689000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:29.693000 audit[3585]: AVC avc: denied { write } for pid=3585 comm="tee" name="fd" dev="proc" ino=25651 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:29.693000 audit[3585]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb9cec7d5 a2=241 a3=1b6 items=1 ppid=3518 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.693000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 13 00:54:29.693000 audit: PATH item=0 name="/dev/fd/63" inode=22292 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:29.693000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:29.706000 audit[3575]: AVC avc: denied { write } for pid=3575 comm="tee" name="fd" dev="proc" ino=24412 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:29.706000 audit[3575]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffc62547e5 a2=241 a3=1b6 items=1 ppid=3529 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.706000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 13 00:54:29.706000 audit: PATH item=0 name="/dev/fd/63" inode=25065 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 
nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:29.706000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:29.707000 audit[3581]: AVC avc: denied { write } for pid=3581 comm="tee" name="fd" dev="proc" ino=24416 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:29.707000 audit[3581]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffaeef37e5 a2=241 a3=1b6 items=1 ppid=3524 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.707000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 13 00:54:29.707000 audit: PATH item=0 name="/dev/fd/63" inode=25068 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:29.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:29.715000 audit[3605]: AVC avc: denied { write } for pid=3605 comm="tee" name="fd" dev="proc" ino=25656 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:29.715000 audit[3605]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdc2cbc7e6 a2=241 a3=1b6 items=1 ppid=3516 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.715000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 13 00:54:29.715000 audit: PATH item=0 name="/dev/fd/63" 
inode=25076 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:29.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:29.724383 systemd-resolved[1220]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:54:29.773096 env[1303]: time="2025-09-13T00:54:29.773040137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-874dd5657-lw978,Uid:f049d712-11f3-4746-ae35-07a865920bf1,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1\"" Sep 13 00:54:29.775508 env[1303]: time="2025-09-13T00:54:29.775476054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit: BPF prog-id=10 op=LOAD Sep 13 00:54:29.854000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc212ce790 a2=98 a3=1fffffffffffffff items=0 ppid=3530 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.854000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:29.854000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit: BPF prog-id=11 op=LOAD Sep 13 00:54:29.854000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc212ce670 a2=94 a3=3 items=0 ppid=3530 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Sep 13 00:54:29.854000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:29.854000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit: BPF prog-id=12 op=LOAD Sep 13 00:54:29.854000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc212ce6b0 a2=94 a3=7ffc212ce890 items=0 ppid=3530 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.854000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:29.854000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:54:29.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.854000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc212ce780 a2=50 a3=a000000085 items=0 ppid=3530 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.854000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for 
pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit: BPF prog-id=13 op=LOAD Sep 13 00:54:29.856000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=3 a0=5 a1=7ffcf76c1bb0 a2=98 a3=3 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.856000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.856000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit: BPF prog-id=14 op=LOAD Sep 13 00:54:29.856000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf76c19a0 a2=94 a3=54428f items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.856000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.856000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied 
{ perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.856000 audit: BPF prog-id=15 op=LOAD Sep 13 00:54:29.856000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf76c19d0 a2=94 a3=2 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.856000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.856000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:54:29.961000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.961000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.961000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:54:29.961000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.961000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.961000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.961000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.961000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.961000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.961000 audit: BPF prog-id=16 op=LOAD Sep 13 00:54:29.961000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf76c1890 a2=94 a3=1 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.961000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.961000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:54:29.961000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:54:29.961000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcf76c1960 a2=50 a3=7ffcf76c1a40 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.961000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.969000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.969000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf76c18a0 a2=28 a3=0 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.969000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.969000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcf76c18d0 a2=28 a3=0 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.969000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.969000 audit[3680]: 
SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcf76c17e0 a2=28 a3=0 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.969000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.969000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf76c18f0 a2=28 a3=0 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.969000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.969000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf76c18d0 a2=28 a3=0 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.969000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.969000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf76c18c0 
a2=28 a3=0 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.969000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.969000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf76c18f0 a2=28 a3=0 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.969000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.969000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcf76c18d0 a2=28 a3=0 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.969000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.969000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcf76c18f0 a2=28 a3=0 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.969000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.969000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcf76c18c0 a2=28 a3=0 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.969000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.969000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf76c1930 a2=28 a3=0 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcf76c16e0 a2=50 a3=1 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.970000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit: BPF prog-id=17 op=LOAD Sep 13 00:54:29.970000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcf76c16e0 a2=94 a3=5 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.970000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.970000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcf76c1790 a2=50 a3=1 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.970000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffcf76c18b0 a2=4 a3=38 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.970000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for 
pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { confidentiality } for pid=3680 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:29.970000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcf76c1900 a2=94 a3=6 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.970000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { confidentiality } for pid=3680 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:29.970000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcf76c10b0 a2=94 a3=88 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.970000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.970000 audit[3680]: AVC avc: denied { confidentiality } for pid=3680 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:29.970000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcf76c10b0 a2=94 a3=88 items=0 ppid=3530 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.970000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit: BPF prog-id=18 op=LOAD Sep 13 00:54:29.978000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefa43e450 a2=98 a3=1999999999999999 items=0 ppid=3530 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:54:29.978000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit: BPF prog-id=19 op=LOAD Sep 13 00:54:29.978000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefa43e330 a2=94 a3=ffff items=0 ppid=3530 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:54:29.978000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { 
bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:29.978000 audit: BPF prog-id=20 op=LOAD Sep 13 00:54:29.978000 audit[3683]: SYSCALL arch=c000003e syscall=321 
success=yes exit=3 a0=5 a1=7ffefa43e370 a2=94 a3=7ffefa43e550 items=0 ppid=3530 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:29.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:54:29.978000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:54:30.029754 systemd-networkd[1078]: vxlan.calico: Link UP Sep 13 00:54:30.029765 systemd-networkd[1078]: vxlan.calico: Gained carrier Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit: BPF prog-id=21 op=LOAD Sep 13 00:54:30.041000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe393aae70 a2=98 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.041000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit: BPF prog-id=22 op=LOAD Sep 13 00:54:30.041000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe393aac80 a2=94 a3=54428f items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.041000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.041000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit: BPF prog-id=23 op=LOAD Sep 13 00:54:30.041000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe393aacb0 a2=94 a3=2 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.041000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe393aab80 a2=28 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=no 
exit=-22 a0=12 a1=7ffe393aabb0 a2=28 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe393aaac0 a2=28 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe393aabd0 a2=28 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.041000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.041000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.041000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe393aabb0 a2=28 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe393aaba0 a2=28 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.042000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe393aabd0 a2=28 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.042000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe393aabb0 a2=28 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.042000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe393aabd0 a2=28 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:54:30.042000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe393aaba0 a2=28 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.042000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe393aac10 a2=28 a3=0 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.042000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit: BPF prog-id=24 op=LOAD Sep 13 00:54:30.042000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe393aaa80 a2=94 a3=0 
items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.042000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.042000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:54:30.042000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.042000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe393aaa70 a2=50 a3=2800 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.042000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe393aaa70 a2=50 a3=2800 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.043000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { bpf } for 
pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit: BPF prog-id=25 op=LOAD Sep 13 00:54:30.043000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe393aa290 a2=94 a3=2 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.043000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.043000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { perfmon } for pid=3707 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit[3707]: AVC avc: denied { bpf } for pid=3707 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.043000 audit: BPF prog-id=26 op=LOAD Sep 13 00:54:30.043000 audit[3707]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe393aa390 a2=94 a3=30 items=0 ppid=3530 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.043000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:30.047336 env[1303]: time="2025-09-13T00:54:30.043258101Z" level=info msg="StopPodSandbox for 
\"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\"" Sep 13 00:54:30.047383 kubelet[2119]: I0913 00:54:30.044634 2119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70415598-0c6b-4edf-8bd8-a17e4dfe2ca9" path="/var/lib/kubelet/pods/70415598-0c6b-4edf-8bd8-a17e4dfe2ca9/volumes" Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit: BPF prog-id=27 op=LOAD Sep 13 00:54:30.046000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc1503c2c0 a2=98 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.046000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.046000 audit: BPF prog-id=27 op=UNLOAD Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit: BPF prog-id=28 op=LOAD Sep 13 00:54:30.046000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc1503c0b0 a2=94 a3=54428f items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.046000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.046000 audit: BPF prog-id=28 op=UNLOAD Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.046000 audit: BPF prog-id=29 op=LOAD Sep 13 00:54:30.046000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc1503c0e0 a2=94 a3=2 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.046000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.046000 audit: BPF prog-id=29 op=UNLOAD Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.096 [INFO][3729] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.097 [INFO][3729] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" iface="eth0" netns="/var/run/netns/cni-4fd0243f-3514-b2ef-4cd1-fc4267e05ba9" Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.097 [INFO][3729] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" iface="eth0" netns="/var/run/netns/cni-4fd0243f-3514-b2ef-4cd1-fc4267e05ba9" Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.097 [INFO][3729] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" iface="eth0" netns="/var/run/netns/cni-4fd0243f-3514-b2ef-4cd1-fc4267e05ba9" Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.097 [INFO][3729] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.097 [INFO][3729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.119 [INFO][3738] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" HandleID="k8s-pod-network.6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.119 [INFO][3738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.119 [INFO][3738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.124 [WARNING][3738] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" HandleID="k8s-pod-network.6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.124 [INFO][3738] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" HandleID="k8s-pod-network.6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.125 [INFO][3738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:30.129487 env[1303]: 2025-09-13 00:54:30.127 [INFO][3729] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:30.130409 env[1303]: time="2025-09-13T00:54:30.130362175Z" level=info msg="TearDown network for sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\" successfully" Sep 13 00:54:30.130409 env[1303]: time="2025-09-13T00:54:30.130404465Z" level=info msg="StopPodSandbox for \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\" returns successfully" Sep 13 00:54:30.131065 env[1303]: time="2025-09-13T00:54:30.131036943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966dc6bcb-g8gcj,Uid:4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:54:30.181178 kernel: kauditd_printk_skb: 423 callbacks suppressed Sep 13 00:54:30.181329 kernel: audit: type=1400 audit(1757724870.165:380): avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.181355 kernel: audit: type=1400 audit(1757724870.165:380): avc: denied { 
bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.181373 kernel: audit: type=1400 audit(1757724870.165:380): avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.181390 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Sep 13 00:54:30.181410 kernel: audit: type=1400 audit(1757724870.165:380): avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.181426 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Sep 13 00:54:30.181448 kernel: audit: backlog limit exceeded Sep 13 00:54:30.181466 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Sep 13 00:54:30.165000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.165000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.165000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.165000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.182780 kernel: audit: type=1400 audit(1757724870.165:380): avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.182813 kernel: audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64 Sep 13 00:54:30.165000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.165000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.165000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.165000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.165000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.165000 audit: BPF prog-id=30 op=LOAD Sep 13 00:54:30.165000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc1503bfa0 a2=94 a3=1 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.165000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.166000 audit: BPF prog-id=30 op=UNLOAD Sep 13 00:54:30.166000 audit[3717]: AVC avc: denied { 
perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.166000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc1503c070 a2=50 a3=7ffc1503c150 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1503bfb0 a2=28 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc1503bfe0 a2=28 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc1503bef0 a2=28 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1503c000 a2=28 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1503bfe0 a2=28 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1503bfd0 a2=28 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1503c000 a2=28 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc1503bfe0 a2=28 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc1503c000 a2=28 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc1503bfd0 a2=28 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1503c040 a2=28 a3=0 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc1503bdf0 a2=50 a3=1 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.174000 audit: BPF prog-id=31 op=LOAD Sep 13 00:54:30.174000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc1503bdf0 a2=94 a3=5 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.186000 audit: BPF prog-id=31 op=UNLOAD Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc1503bea0 a2=50 a3=1 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc1503bfc0 a2=4 a3=38 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { confidentiality } for pid=3717 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:30.186000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc1503c010 a2=94 a3=6 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: 
denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { confidentiality } for pid=3717 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:30.186000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc1503b7c0 a2=94 a3=88 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { confidentiality } for pid=3717 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:30.186000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc1503b7c0 a2=94 a3=88 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc1503d1f0 a2=10 a3=208 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc1503d090 a2=10 a3=3 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc1503d030 a2=10 a3=3 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.186000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:30.186000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc1503d030 a2=10 a3=7 items=0 ppid=3530 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:30.196000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:54:30.259000 
audit[3788]: NETFILTER_CFG table=raw:99 family=2 entries=21 op=nft_register_chain pid=3788 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:30.259000 audit[3788]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffdafacc10 a2=0 a3=7fffdafacbfc items=0 ppid=3530 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.259000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:30.260518 systemd-networkd[1078]: cali03cc84b9fd4: Link UP Sep 13 00:54:30.273518 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali03cc84b9fd4: link becomes ready Sep 13 00:54:30.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.131:22-10.0.0.1:50186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:30.268000 audit[3791]: NETFILTER_CFG table=mangle:100 family=2 entries=16 op=nft_register_chain pid=3791 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:30.268000 audit[3791]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffec3518390 a2=0 a3=7ffec351837c items=0 ppid=3530 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.268000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:30.263943 systemd[1]: Started sshd@8-10.0.0.131:22-10.0.0.1:50186.service. 
Sep 13 00:54:30.274651 systemd-networkd[1078]: cali03cc84b9fd4: Gained carrier Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.197 [INFO][3747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0 calico-apiserver-966dc6bcb- calico-apiserver 4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c 973 0 2025-09-13 00:54:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:966dc6bcb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-966dc6bcb-g8gcj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali03cc84b9fd4 [] [] }} ContainerID="7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-g8gcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.198 [INFO][3747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-g8gcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.226 [INFO][3764] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" HandleID="k8s-pod-network.7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.226 [INFO][3764] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" 
HandleID="k8s-pod-network.7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-966dc6bcb-g8gcj", "timestamp":"2025-09-13 00:54:30.226189257 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.226 [INFO][3764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.226 [INFO][3764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.226 [INFO][3764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.235 [INFO][3764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" host="localhost" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.239 [INFO][3764] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.243 [INFO][3764] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.244 [INFO][3764] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.247 [INFO][3764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.247 [INFO][3764] ipam/ipam.go 1220: Attempting to 
assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" host="localhost" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.249 [INFO][3764] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.253 [INFO][3764] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" host="localhost" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.256 [INFO][3764] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" host="localhost" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.257 [INFO][3764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" host="localhost" Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.257 [INFO][3764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:54:30.277964 env[1303]: 2025-09-13 00:54:30.257 [INFO][3764] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" HandleID="k8s-pod-network.7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:30.278510 env[1303]: 2025-09-13 00:54:30.259 [INFO][3747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-g8gcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0", GenerateName:"calico-apiserver-966dc6bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966dc6bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-966dc6bcb-g8gcj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03cc84b9fd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:30.278510 env[1303]: 2025-09-13 00:54:30.259 [INFO][3747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-g8gcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:30.278510 env[1303]: 2025-09-13 00:54:30.259 [INFO][3747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03cc84b9fd4 ContainerID="7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-g8gcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:30.278510 env[1303]: 2025-09-13 00:54:30.260 [INFO][3747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-g8gcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:30.278510 env[1303]: 2025-09-13 00:54:30.261 [INFO][3747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-g8gcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0", GenerateName:"calico-apiserver-966dc6bcb-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966dc6bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b", Pod:"calico-apiserver-966dc6bcb-g8gcj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03cc84b9fd4", MAC:"a2:59:3a:24:07:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:30.278510 env[1303]: 2025-09-13 00:54:30.271 [INFO][3747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-g8gcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:30.277000 audit[3790]: NETFILTER_CFG table=nat:101 family=2 entries=15 op=nft_register_chain pid=3790 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:30.277000 audit[3790]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffffc2a12a0 a2=0 a3=7ffffc2a128c items=0 ppid=3530 pid=3790 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.277000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:30.291323 env[1303]: time="2025-09-13T00:54:30.291229158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:30.292452 env[1303]: time="2025-09-13T00:54:30.291436478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:30.292452 env[1303]: time="2025-09-13T00:54:30.291472515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:30.292452 env[1303]: time="2025-09-13T00:54:30.291781706Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b pid=3818 runtime=io.containerd.runc.v2 Sep 13 00:54:30.289000 audit[3793]: NETFILTER_CFG table=filter:102 family=2 entries=94 op=nft_register_chain pid=3793 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:30.289000 audit[3793]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe74169d40 a2=0 a3=7ffe74169d2c items=0 ppid=3530 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.289000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 
13 00:54:30.313000 audit[3797]: USER_ACCT pid=3797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:30.314000 audit[3797]: CRED_ACQ pid=3797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:30.316209 sshd[3797]: Accepted publickey for core from 10.0.0.1 port 50186 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:30.314000 audit[3797]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd51f33b30 a2=3 a3=0 items=0 ppid=1 pid=3797 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.314000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:30.317227 sshd[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:30.321543 systemd-resolved[1220]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:54:30.326309 systemd[1]: Started session-9.scope. Sep 13 00:54:30.326747 systemd-logind[1289]: New session 9 of user core. 
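The audit `PROCTITLE` fields in the records above are hex-encoded argv strings with NUL separators. A small standalone sketch for decoding them — not part of the log; the hex values below are copied verbatim from the audit entries above:

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex bytes, argv entries NUL-separated."""
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode("ascii", errors="replace")

# From the sshd session's PROCTITLE record above.
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]

# From the iptables-nft-restore PROCTITLE records above.
print(decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130"
    "002D2D776169742D696E74657276616C003530303030"
))
# -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
```

This confirms the `comm="iptables-nft-re"` entries are Calico driving `iptables-nft-restore` via `xtables-nft-multi` with wait flags, as the `exe=` field suggests.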
Sep 13 00:54:30.330000 audit[3797]: USER_START pid=3797 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:30.331000 audit[3854]: CRED_ACQ pid=3854 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:30.337000 audit[3853]: NETFILTER_CFG table=filter:103 family=2 entries=50 op=nft_register_chain pid=3853 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:30.337000 audit[3853]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffc584d5fe0 a2=0 a3=7ffc584d5fcc items=0 ppid=3530 pid=3853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:30.337000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:30.348406 env[1303]: time="2025-09-13T00:54:30.348370300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966dc6bcb-g8gcj,Uid:4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b\"" Sep 13 00:54:30.448780 sshd[3797]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:30.448000 audit[3797]: USER_END pid=3797 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:30.448000 audit[3797]: CRED_DISP pid=3797 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:30.451298 systemd[1]: sshd@8-10.0.0.131:22-10.0.0.1:50186.service: Deactivated successfully. Sep 13 00:54:30.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.131:22-10.0.0.1:50186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:30.452144 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:54:30.452863 systemd-logind[1289]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:54:30.453630 systemd-logind[1289]: Removed session 9. Sep 13 00:54:30.482420 systemd[1]: run-netns-cni\x2d4fd0243f\x2d3514\x2db2ef\x2d4cd1\x2dfc4267e05ba9.mount: Deactivated successfully. Sep 13 00:54:31.040336 env[1303]: time="2025-09-13T00:54:31.040275629Z" level=info msg="StopPodSandbox for \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\"" Sep 13 00:54:31.041450 env[1303]: time="2025-09-13T00:54:31.040422755Z" level=info msg="StopPodSandbox for \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\"" Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.165 [INFO][3897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.165 [INFO][3897] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" iface="eth0" netns="/var/run/netns/cni-c930afb3-2c61-456f-80eb-5c512a2afa12" Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.166 [INFO][3897] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" iface="eth0" netns="/var/run/netns/cni-c930afb3-2c61-456f-80eb-5c512a2afa12" Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.166 [INFO][3897] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" iface="eth0" netns="/var/run/netns/cni-c930afb3-2c61-456f-80eb-5c512a2afa12" Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.166 [INFO][3897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.166 [INFO][3897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.198 [INFO][3913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" HandleID="k8s-pod-network.aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.198 [INFO][3913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.198 [INFO][3913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.202 [WARNING][3913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" HandleID="k8s-pod-network.aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.202 [INFO][3913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" HandleID="k8s-pod-network.aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.203 [INFO][3913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:31.206629 env[1303]: 2025-09-13 00:54:31.205 [INFO][3897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:31.212467 env[1303]: time="2025-09-13T00:54:31.209180056Z" level=info msg="TearDown network for sandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\" successfully" Sep 13 00:54:31.212467 env[1303]: time="2025-09-13T00:54:31.209211265Z" level=info msg="StopPodSandbox for \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\" returns successfully" Sep 13 00:54:31.212467 env[1303]: time="2025-09-13T00:54:31.209868961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzvl6,Uid:ad838603-c026-4e41-bf47-8168df866652,Namespace:calico-system,Attempt:1,}" Sep 13 00:54:31.211254 systemd[1]: run-netns-cni\x2dc930afb3\x2d2c61\x2d456f\x2d80eb\x2d5c512a2afa12.mount: Deactivated successfully. 
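The `run-netns-cni\x2d…` mount unit names in the systemd messages above use systemd's unit-name escaping, where `/` becomes `-` and literal `-` becomes `\x2d`. A standalone sketch of the reverse mapping (a hypothetical helper, not part of the log), applied to the unit name from the entry above:

```python
import re

def systemd_unescape(name: str) -> str:
    """Reverse systemd unit-name escaping: '\\xNN' -> that byte, '-' -> '/'.
    Prepends '/' since mount unit names encode absolute paths."""
    parts = re.split(r"(\\x[0-9a-fA-F]{2})", name)
    out = []
    for p in parts:
        if p.startswith("\\x"):
            out.append(chr(int(p[2:], 16)))   # escaped byte, e.g. \x2d -> '-'
        else:
            out.append(p.replace("-", "/"))   # plain '-' separates path components
    return "/" + "".join(out)

print(systemd_unescape(
    "run-netns-cni\\x2dc930afb3\\x2d2c61\\x2d456f\\x2d80eb\\x2d5c512a2afa12"
))
# -> /run/netns/cni-c930afb3-2c61-456f-80eb-5c512a2afa12
```

The decoded path matches the `netns="/var/run/netns/cni-c930afb3-…"` seen in the Calico teardown entries (`/var/run` is a symlink to `/run` on Flatcar).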
Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.172 [INFO][3898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.172 [INFO][3898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" iface="eth0" netns="/var/run/netns/cni-bb6e0b15-d83e-0522-841c-3f6dc53c3b4c" Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.172 [INFO][3898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" iface="eth0" netns="/var/run/netns/cni-bb6e0b15-d83e-0522-841c-3f6dc53c3b4c" Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.172 [INFO][3898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" iface="eth0" netns="/var/run/netns/cni-bb6e0b15-d83e-0522-841c-3f6dc53c3b4c" Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.172 [INFO][3898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.172 [INFO][3898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.206 [INFO][3919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" HandleID="k8s-pod-network.047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.206 [INFO][3919] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.207 [INFO][3919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.212 [WARNING][3919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" HandleID="k8s-pod-network.047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.212 [INFO][3919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" HandleID="k8s-pod-network.047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.213 [INFO][3919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:31.217043 env[1303]: 2025-09-13 00:54:31.215 [INFO][3898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:31.217572 env[1303]: time="2025-09-13T00:54:31.217532688Z" level=info msg="TearDown network for sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\" successfully" Sep 13 00:54:31.217659 env[1303]: time="2025-09-13T00:54:31.217637154Z" level=info msg="StopPodSandbox for \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\" returns successfully" Sep 13 00:54:31.220085 systemd[1]: run-netns-cni\x2dbb6e0b15\x2dd83e\x2d0522\x2d841c\x2d3f6dc53c3b4c.mount: Deactivated successfully. 
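The Calico IPAM entries in this log assign pod addresses (192.168.88.130, then 192.168.88.131 below) from the host's affinity block 192.168.88.128/26. A quick standalone sanity check of that containment with Python's `ipaddress` module — a sketch for verification only, not part of the log:

```python
import ipaddress

# Affinity block and assigned addresses, as recorded in the ipam/ipam.go lines.
block = ipaddress.ip_network("192.168.88.128/26")
assigned = [ipaddress.ip_address("192.168.88.130"),
            ipaddress.ip_address("192.168.88.131")]

for ip in assigned:
    assert ip in block          # each claimed IP falls inside the /26 affinity block

print(block.num_addresses)      # -> 64 (a /26 holds 64 addresses)
```

The per-pod `IPNetworks` entries are written as /32 host routes (e.g. `192.168.88.130/32`) even though allocation happens against the /26 block, which is why both prefix lengths appear in the same entries.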
Sep 13 00:54:31.221140 env[1303]: time="2025-09-13T00:54:31.220127984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798cbcbbb6-8gg8t,Uid:e8351b28-7c27-4f21-ad76-83e2e206ba63,Namespace:calico-system,Attempt:1,}" Sep 13 00:54:31.338324 env[1303]: time="2025-09-13T00:54:31.338128602Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:31.340631 env[1303]: time="2025-09-13T00:54:31.340600456Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:31.344341 env[1303]: time="2025-09-13T00:54:31.344301710Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:31.345177 systemd-networkd[1078]: calid7cf627950f: Link UP Sep 13 00:54:31.346313 env[1303]: time="2025-09-13T00:54:31.346277311Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:31.347523 env[1303]: time="2025-09-13T00:54:31.347098293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:54:31.347870 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:54:31.347933 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid7cf627950f: link becomes ready Sep 13 00:54:31.347958 systemd-networkd[1078]: calid7cf627950f: Gained carrier Sep 13 00:54:31.356042 env[1303]: time="2025-09-13T00:54:31.355990628Z" level=info 
msg="CreateContainer within sandbox \"ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:54:31.357006 env[1303]: time="2025-09-13T00:54:31.356986399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.278 [INFO][3930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wzvl6-eth0 csi-node-driver- calico-system ad838603-c026-4e41-bf47-8168df866652 986 0 2025-09-13 00:54:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wzvl6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid7cf627950f [] [] }} ContainerID="bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" Namespace="calico-system" Pod="csi-node-driver-wzvl6" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzvl6-" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.278 [INFO][3930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" Namespace="calico-system" Pod="csi-node-driver-wzvl6" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.304 [INFO][3957] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" HandleID="k8s-pod-network.bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 
00:54:31.305 [INFO][3957] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" HandleID="k8s-pod-network.bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e75f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wzvl6", "timestamp":"2025-09-13 00:54:31.304857849 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.305 [INFO][3957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.305 [INFO][3957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.305 [INFO][3957] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.310 [INFO][3957] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" host="localhost" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.314 [INFO][3957] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.323 [INFO][3957] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.324 [INFO][3957] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.326 [INFO][3957] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.326 [INFO][3957] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" host="localhost" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.327 [INFO][3957] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66 Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.331 [INFO][3957] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" host="localhost" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.337 [INFO][3957] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" host="localhost" Sep 13 
00:54:31.359456 env[1303]: 2025-09-13 00:54:31.337 [INFO][3957] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" host="localhost" Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.337 [INFO][3957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:31.359456 env[1303]: 2025-09-13 00:54:31.337 [INFO][3957] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" HandleID="k8s-pod-network.bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:31.360015 env[1303]: 2025-09-13 00:54:31.341 [INFO][3930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" Namespace="calico-system" Pod="csi-node-driver-wzvl6" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzvl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzvl6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad838603-c026-4e41-bf47-8168df866652", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wzvl6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7cf627950f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:31.360015 env[1303]: 2025-09-13 00:54:31.341 [INFO][3930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" Namespace="calico-system" Pod="csi-node-driver-wzvl6" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:31.360015 env[1303]: 2025-09-13 00:54:31.341 [INFO][3930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7cf627950f ContainerID="bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" Namespace="calico-system" Pod="csi-node-driver-wzvl6" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:31.360015 env[1303]: 2025-09-13 00:54:31.348 [INFO][3930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" Namespace="calico-system" Pod="csi-node-driver-wzvl6" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:31.360015 env[1303]: 2025-09-13 00:54:31.349 [INFO][3930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" Namespace="calico-system" Pod="csi-node-driver-wzvl6" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzvl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzvl6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad838603-c026-4e41-bf47-8168df866652", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66", Pod:"csi-node-driver-wzvl6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7cf627950f", MAC:"5e:e6:30:31:27:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:31.360015 env[1303]: 2025-09-13 00:54:31.357 [INFO][3930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66" Namespace="calico-system" Pod="csi-node-driver-wzvl6" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:31.370000 audit[3983]: NETFILTER_CFG table=filter:104 family=2 entries=40 op=nft_register_chain pid=3983 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 
00:54:31.370000 audit[3983]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffe301c5750 a2=0 a3=7ffe301c573c items=0 ppid=3530 pid=3983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.370000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:31.375620 env[1303]: time="2025-09-13T00:54:31.375586772Z" level=info msg="CreateContainer within sandbox \"ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5acfa67e385b2ac0fc4d99db1ccaf79208c323c03bad9113e7f0935c726d3c32\"" Sep 13 00:54:31.376345 env[1303]: time="2025-09-13T00:54:31.376326131Z" level=info msg="StartContainer for \"5acfa67e385b2ac0fc4d99db1ccaf79208c323c03bad9113e7f0935c726d3c32\"" Sep 13 00:54:31.378068 env[1303]: time="2025-09-13T00:54:31.378016056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:31.378152 env[1303]: time="2025-09-13T00:54:31.378052494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:31.378152 env[1303]: time="2025-09-13T00:54:31.378063495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:31.378408 env[1303]: time="2025-09-13T00:54:31.378305970Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66 pid=3992 runtime=io.containerd.runc.v2 Sep 13 00:54:31.405487 systemd-resolved[1220]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:54:31.424377 env[1303]: time="2025-09-13T00:54:31.424332497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzvl6,Uid:ad838603-c026-4e41-bf47-8168df866652,Namespace:calico-system,Attempt:1,} returns sandbox id \"bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66\"" Sep 13 00:54:31.447722 systemd-networkd[1078]: cali5297e5d61b0: Link UP Sep 13 00:54:31.450685 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5297e5d61b0: link becomes ready Sep 13 00:54:31.450105 systemd-networkd[1078]: cali5297e5d61b0: Gained carrier Sep 13 00:54:31.469782 env[1303]: time="2025-09-13T00:54:31.469722979Z" level=info msg="StartContainer for \"5acfa67e385b2ac0fc4d99db1ccaf79208c323c03bad9113e7f0935c726d3c32\" returns successfully" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.291 [INFO][3940] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0 calico-kube-controllers-798cbcbbb6- calico-system e8351b28-7c27-4f21-ad76-83e2e206ba63 987 0 2025-09-13 00:54:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:798cbcbbb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-798cbcbbb6-8gg8t eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali5297e5d61b0 [] [] }} ContainerID="19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" Namespace="calico-system" Pod="calico-kube-controllers-798cbcbbb6-8gg8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.291 [INFO][3940] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" Namespace="calico-system" Pod="calico-kube-controllers-798cbcbbb6-8gg8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.319 [INFO][3965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" HandleID="k8s-pod-network.19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.319 [INFO][3965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" HandleID="k8s-pod-network.19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001354d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-798cbcbbb6-8gg8t", "timestamp":"2025-09-13 00:54:31.319194082 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.319 [INFO][3965] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.337 [INFO][3965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.337 [INFO][3965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.413 [INFO][3965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" host="localhost" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.420 [INFO][3965] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.425 [INFO][3965] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.430 [INFO][3965] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.433 [INFO][3965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.433 [INFO][3965] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" host="localhost" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.434 [INFO][3965] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.438 [INFO][3965] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" host="localhost" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.443 [INFO][3965] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" host="localhost" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.443 [INFO][3965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" host="localhost" Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.443 [INFO][3965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:31.477447 env[1303]: 2025-09-13 00:54:31.443 [INFO][3965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" HandleID="k8s-pod-network.19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:31.478286 env[1303]: 2025-09-13 00:54:31.445 [INFO][3940] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" Namespace="calico-system" Pod="calico-kube-controllers-798cbcbbb6-8gg8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0", GenerateName:"calico-kube-controllers-798cbcbbb6-", Namespace:"calico-system", SelfLink:"", UID:"e8351b28-7c27-4f21-ad76-83e2e206ba63", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798cbcbbb6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-798cbcbbb6-8gg8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5297e5d61b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:31.478286 env[1303]: 2025-09-13 00:54:31.445 [INFO][3940] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" Namespace="calico-system" Pod="calico-kube-controllers-798cbcbbb6-8gg8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:31.478286 env[1303]: 2025-09-13 00:54:31.446 [INFO][3940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5297e5d61b0 ContainerID="19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" Namespace="calico-system" Pod="calico-kube-controllers-798cbcbbb6-8gg8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:31.478286 env[1303]: 2025-09-13 00:54:31.449 [INFO][3940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" Namespace="calico-system" Pod="calico-kube-controllers-798cbcbbb6-8gg8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:31.478286 
env[1303]: 2025-09-13 00:54:31.449 [INFO][3940] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" Namespace="calico-system" Pod="calico-kube-controllers-798cbcbbb6-8gg8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0", GenerateName:"calico-kube-controllers-798cbcbbb6-", Namespace:"calico-system", SelfLink:"", UID:"e8351b28-7c27-4f21-ad76-83e2e206ba63", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798cbcbbb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f", Pod:"calico-kube-controllers-798cbcbbb6-8gg8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5297e5d61b0", MAC:"b2:8e:a6:47:7d:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:31.478286 
env[1303]: 2025-09-13 00:54:31.470 [INFO][3940] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f" Namespace="calico-system" Pod="calico-kube-controllers-798cbcbbb6-8gg8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:31.487214 env[1303]: time="2025-09-13T00:54:31.487148655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:31.487406 env[1303]: time="2025-09-13T00:54:31.487191224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:31.487406 env[1303]: time="2025-09-13T00:54:31.487226782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:31.487532 env[1303]: time="2025-09-13T00:54:31.487456823Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f pid=4073 runtime=io.containerd.runc.v2 Sep 13 00:54:31.491000 audit[4079]: NETFILTER_CFG table=filter:105 family=2 entries=44 op=nft_register_chain pid=4079 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:31.491000 audit[4079]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffeb7406160 a2=0 a3=7ffeb740614c items=0 ppid=3530 pid=4079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.491000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:31.513067 
systemd-resolved[1220]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:54:31.533774 env[1303]: time="2025-09-13T00:54:31.533727328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798cbcbbb6-8gg8t,Uid:e8351b28-7c27-4f21-ad76-83e2e206ba63,Namespace:calico-system,Attempt:1,} returns sandbox id \"19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f\"" Sep 13 00:54:31.661775 systemd-networkd[1078]: cali03cc84b9fd4: Gained IPv6LL Sep 13 00:54:31.662935 systemd-networkd[1078]: califa3c7dc410c: Gained IPv6LL Sep 13 00:54:31.726148 systemd-networkd[1078]: vxlan.calico: Gained IPv6LL Sep 13 00:54:32.040106 env[1303]: time="2025-09-13T00:54:32.039963172Z" level=info msg="StopPodSandbox for \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\"" Sep 13 00:54:32.040502 env[1303]: time="2025-09-13T00:54:32.039989621Z" level=info msg="StopPodSandbox for \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\"" Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.088 [INFO][4139] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.089 [INFO][4139] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" iface="eth0" netns="/var/run/netns/cni-63e3a769-8a08-1cb9-e241-464bcb254363" Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.089 [INFO][4139] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" iface="eth0" netns="/var/run/netns/cni-63e3a769-8a08-1cb9-e241-464bcb254363" Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.089 [INFO][4139] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" iface="eth0" netns="/var/run/netns/cni-63e3a769-8a08-1cb9-e241-464bcb254363" Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.089 [INFO][4139] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.089 [INFO][4139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.113 [INFO][4156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" HandleID="k8s-pod-network.a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.113 [INFO][4156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.114 [INFO][4156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.119 [WARNING][4156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" HandleID="k8s-pod-network.a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.119 [INFO][4156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" HandleID="k8s-pod-network.a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.120 [INFO][4156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:32.127394 env[1303]: 2025-09-13 00:54:32.122 [INFO][4139] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:32.127394 env[1303]: time="2025-09-13T00:54:32.124310552Z" level=info msg="TearDown network for sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\" successfully" Sep 13 00:54:32.127394 env[1303]: time="2025-09-13T00:54:32.124344135Z" level=info msg="StopPodSandbox for \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\" returns successfully" Sep 13 00:54:32.127394 env[1303]: time="2025-09-13T00:54:32.125392504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dqtkm,Uid:b5d081d4-7d87-4234-8171-fc6646bb9f9b,Namespace:kube-system,Attempt:1,}" Sep 13 00:54:32.128047 kubelet[2119]: E0913 00:54:32.124784 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:32.128822 systemd[1]: run-netns-cni\x2d63e3a769\x2d8a08\x2d1cb9\x2de241\x2d464bcb254363.mount: Deactivated successfully. 
Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.096 [INFO][4138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.096 [INFO][4138] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" iface="eth0" netns="/var/run/netns/cni-e6becdf8-529e-b289-0a04-19410cd92cb8" Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.096 [INFO][4138] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" iface="eth0" netns="/var/run/netns/cni-e6becdf8-529e-b289-0a04-19410cd92cb8" Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.097 [INFO][4138] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" iface="eth0" netns="/var/run/netns/cni-e6becdf8-529e-b289-0a04-19410cd92cb8" Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.097 [INFO][4138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.097 [INFO][4138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.114 [INFO][4162] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" HandleID="k8s-pod-network.a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.114 [INFO][4162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.120 [INFO][4162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.127 [WARNING][4162] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" HandleID="k8s-pod-network.a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.127 [INFO][4162] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" HandleID="k8s-pod-network.a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.130 [INFO][4162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:32.135371 env[1303]: 2025-09-13 00:54:32.133 [INFO][4138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:32.140654 env[1303]: time="2025-09-13T00:54:32.139071500Z" level=info msg="TearDown network for sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\" successfully" Sep 13 00:54:32.140654 env[1303]: time="2025-09-13T00:54:32.139107267Z" level=info msg="StopPodSandbox for \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\" returns successfully" Sep 13 00:54:32.140654 env[1303]: time="2025-09-13T00:54:32.139889607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966dc6bcb-r4qg4,Uid:20e33070-d374-477c-b056-d9ebed8bda5f,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:54:32.138949 systemd[1]: run-netns-cni\x2de6becdf8\x2d529e\x2db289\x2d0a04\x2d19410cd92cb8.mount: Deactivated successfully. 
Sep 13 00:54:32.247509 systemd-networkd[1078]: cali6bb8f6089fb: Link UP Sep 13 00:54:32.249703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6bb8f6089fb: link becomes ready Sep 13 00:54:32.249730 systemd-networkd[1078]: cali6bb8f6089fb: Gained carrier Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.185 [INFO][4171] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0 calico-apiserver-966dc6bcb- calico-apiserver 20e33070-d374-477c-b056-d9ebed8bda5f 1011 0 2025-09-13 00:54:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:966dc6bcb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-966dc6bcb-r4qg4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6bb8f6089fb [] [] }} ContainerID="50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-r4qg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-" Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.185 [INFO][4171] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-r4qg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.209 [INFO][4202] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" HandleID="k8s-pod-network.50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:32.277869 env[1303]: 
2025-09-13 00:54:32.209 [INFO][4202] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" HandleID="k8s-pod-network.50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-966dc6bcb-r4qg4", "timestamp":"2025-09-13 00:54:32.209613226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.209 [INFO][4202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.209 [INFO][4202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.209 [INFO][4202] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.218 [INFO][4202] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" host="localhost" Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.223 [INFO][4202] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.226 [INFO][4202] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.228 [INFO][4202] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.230 [INFO][4202] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.230 [INFO][4202] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" host="localhost" Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.232 [INFO][4202] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.236 [INFO][4202] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" host="localhost" Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.241 [INFO][4202] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" host="localhost" Sep 13 
00:54:32.277869 env[1303]: 2025-09-13 00:54:32.242 [INFO][4202] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" host="localhost" Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.242 [INFO][4202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:32.277869 env[1303]: 2025-09-13 00:54:32.242 [INFO][4202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" HandleID="k8s-pod-network.50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:32.278805 env[1303]: 2025-09-13 00:54:32.244 [INFO][4171] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-r4qg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0", GenerateName:"calico-apiserver-966dc6bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"20e33070-d374-477c-b056-d9ebed8bda5f", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966dc6bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-966dc6bcb-r4qg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6bb8f6089fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:32.278805 env[1303]: 2025-09-13 00:54:32.244 [INFO][4171] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-r4qg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:32.278805 env[1303]: 2025-09-13 00:54:32.244 [INFO][4171] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6bb8f6089fb ContainerID="50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-r4qg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:32.278805 env[1303]: 2025-09-13 00:54:32.250 [INFO][4171] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-r4qg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:32.278805 env[1303]: 2025-09-13 00:54:32.250 [INFO][4171] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-r4qg4" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0", GenerateName:"calico-apiserver-966dc6bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"20e33070-d374-477c-b056-d9ebed8bda5f", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966dc6bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe", Pod:"calico-apiserver-966dc6bcb-r4qg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6bb8f6089fb", MAC:"2a:05:2f:12:c4:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:32.278805 env[1303]: 2025-09-13 00:54:32.274 [INFO][4171] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe" Namespace="calico-apiserver" Pod="calico-apiserver-966dc6bcb-r4qg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" 
Sep 13 00:54:32.287000 audit[4229]: NETFILTER_CFG table=filter:106 family=2 entries=49 op=nft_register_chain pid=4229 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:32.287000 audit[4229]: SYSCALL arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7ffc2d5d9fe0 a2=0 a3=7ffc2d5d9fcc items=0 ppid=3530 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.287000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:32.294360 env[1303]: time="2025-09-13T00:54:32.294286868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:32.294360 env[1303]: time="2025-09-13T00:54:32.294325501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:32.294360 env[1303]: time="2025-09-13T00:54:32.294334999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:32.294641 env[1303]: time="2025-09-13T00:54:32.294588034Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe pid=4238 runtime=io.containerd.runc.v2 Sep 13 00:54:32.317058 systemd-resolved[1220]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:54:32.348504 env[1303]: time="2025-09-13T00:54:32.346937465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966dc6bcb-r4qg4,Uid:20e33070-d374-477c-b056-d9ebed8bda5f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe\"" Sep 13 00:54:32.349448 systemd-networkd[1078]: calif09d4481a4c: Link UP Sep 13 00:54:32.356161 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:54:32.356278 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif09d4481a4c: link becomes ready Sep 13 00:54:32.356007 systemd-networkd[1078]: calif09d4481a4c: Gained carrier Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.196 [INFO][4184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0 coredns-7c65d6cfc9- kube-system b5d081d4-7d87-4234-8171-fc6646bb9f9b 1010 0 2025-09-13 00:53:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-dqtkm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif09d4481a4c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqtkm" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqtkm-" Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.197 [INFO][4184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqtkm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.228 [INFO][4209] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" HandleID="k8s-pod-network.72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.228 [INFO][4209] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" HandleID="k8s-pod-network.72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9100), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-dqtkm", "timestamp":"2025-09-13 00:54:32.228762909 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.228 [INFO][4209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.242 [INFO][4209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.242 [INFO][4209] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.319 [INFO][4209] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" host="localhost" Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.323 [INFO][4209] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.326 [INFO][4209] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.328 [INFO][4209] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.330 [INFO][4209] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.330 [INFO][4209] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" host="localhost" Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.332 [INFO][4209] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731 Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.335 [INFO][4209] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" host="localhost" Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.341 [INFO][4209] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" host="localhost" Sep 13 
00:54:32.366884 env[1303]: 2025-09-13 00:54:32.341 [INFO][4209] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" host="localhost" Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.341 [INFO][4209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:32.366884 env[1303]: 2025-09-13 00:54:32.341 [INFO][4209] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" HandleID="k8s-pod-network.72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:32.367450 env[1303]: 2025-09-13 00:54:32.344 [INFO][4184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqtkm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b5d081d4-7d87-4234-8171-fc6646bb9f9b", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-dqtkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif09d4481a4c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:32.367450 env[1303]: 2025-09-13 00:54:32.344 [INFO][4184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqtkm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:32.367450 env[1303]: 2025-09-13 00:54:32.344 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif09d4481a4c ContainerID="72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqtkm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:32.367450 env[1303]: 2025-09-13 00:54:32.356 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqtkm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:32.367450 env[1303]: 2025-09-13 00:54:32.357 [INFO][4184] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqtkm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b5d081d4-7d87-4234-8171-fc6646bb9f9b", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731", Pod:"coredns-7c65d6cfc9-dqtkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif09d4481a4c", MAC:"f6:d3:a1:93:99:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:32.367450 env[1303]: 2025-09-13 00:54:32.365 [INFO][4184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqtkm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:32.377040 env[1303]: time="2025-09-13T00:54:32.376970803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:32.377119 env[1303]: time="2025-09-13T00:54:32.377021559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:32.377119 env[1303]: time="2025-09-13T00:54:32.377049752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:32.377271 env[1303]: time="2025-09-13T00:54:32.377236903Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731 pid=4290 runtime=io.containerd.runc.v2 Sep 13 00:54:32.379000 audit[4301]: NETFILTER_CFG table=filter:107 family=2 entries=64 op=nft_register_chain pid=4301 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:32.379000 audit[4301]: SYSCALL arch=c000003e syscall=46 success=yes exit=30156 a0=3 a1=7ffceaa83e80 a2=0 a3=7ffceaa83e6c items=0 ppid=3530 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.379000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:32.397612 systemd-resolved[1220]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:54:32.422173 env[1303]: time="2025-09-13T00:54:32.422106552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dqtkm,Uid:b5d081d4-7d87-4234-8171-fc6646bb9f9b,Namespace:kube-system,Attempt:1,} returns sandbox id \"72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731\"" Sep 13 00:54:32.423034 kubelet[2119]: E0913 00:54:32.423005 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:32.425430 env[1303]: time="2025-09-13T00:54:32.425390531Z" level=info msg="CreateContainer within sandbox \"72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:54:32.442147 env[1303]: time="2025-09-13T00:54:32.442096604Z" level=info msg="CreateContainer within sandbox \"72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad890164df89f55086442145b66ad3b5da9a5feb906561f2d1bf4bf81b9851f4\"" Sep 13 00:54:32.442535 env[1303]: time="2025-09-13T00:54:32.442512356Z" level=info msg="StartContainer for \"ad890164df89f55086442145b66ad3b5da9a5feb906561f2d1bf4bf81b9851f4\"" Sep 13 00:54:32.481945 env[1303]: time="2025-09-13T00:54:32.481870060Z" level=info msg="StartContainer for \"ad890164df89f55086442145b66ad3b5da9a5feb906561f2d1bf4bf81b9851f4\" returns successfully" Sep 13 00:54:32.813745 systemd-networkd[1078]: calid7cf627950f: Gained IPv6LL Sep 13 00:54:33.039810 env[1303]: time="2025-09-13T00:54:33.039733796Z" level=info msg="StopPodSandbox for 
\"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\"" Sep 13 00:54:33.040158 env[1303]: time="2025-09-13T00:54:33.040112627Z" level=info msg="StopPodSandbox for \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\"" Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.081 [INFO][4384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.081 [INFO][4384] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" iface="eth0" netns="/var/run/netns/cni-7be0a635-e56a-532d-6919-40820ab35aba" Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.081 [INFO][4384] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" iface="eth0" netns="/var/run/netns/cni-7be0a635-e56a-532d-6919-40820ab35aba" Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.081 [INFO][4384] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" iface="eth0" netns="/var/run/netns/cni-7be0a635-e56a-532d-6919-40820ab35aba" Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.082 [INFO][4384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.082 [INFO][4384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.105 [INFO][4399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" HandleID="k8s-pod-network.fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.105 [INFO][4399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.105 [INFO][4399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.114 [WARNING][4399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" HandleID="k8s-pod-network.fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.114 [INFO][4399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" HandleID="k8s-pod-network.fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.116 [INFO][4399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:33.120226 env[1303]: 2025-09-13 00:54:33.117 [INFO][4384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:33.123018 systemd[1]: run-netns-cni\x2d7be0a635\x2de56a\x2d532d\x2d6919\x2d40820ab35aba.mount: Deactivated successfully. 
Sep 13 00:54:33.123638 env[1303]: time="2025-09-13T00:54:33.123505597Z" level=info msg="TearDown network for sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\" successfully" Sep 13 00:54:33.123638 env[1303]: time="2025-09-13T00:54:33.123543268Z" level=info msg="StopPodSandbox for \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\" returns successfully" Sep 13 00:54:33.123982 kubelet[2119]: E0913 00:54:33.123920 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:33.124764 env[1303]: time="2025-09-13T00:54:33.124726461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xqk2c,Uid:29726fd2-0f28-42d0-a860-baf11550e993,Namespace:kube-system,Attempt:1,}" Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.098 [INFO][4385] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.098 [INFO][4385] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" iface="eth0" netns="/var/run/netns/cni-820bf341-c26c-4142-9d9a-24ff4e94f23a" Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.099 [INFO][4385] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" iface="eth0" netns="/var/run/netns/cni-820bf341-c26c-4142-9d9a-24ff4e94f23a" Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.099 [INFO][4385] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" iface="eth0" netns="/var/run/netns/cni-820bf341-c26c-4142-9d9a-24ff4e94f23a" Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.099 [INFO][4385] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.099 [INFO][4385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.133 [INFO][4408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" HandleID="k8s-pod-network.8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.134 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.134 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.148 [WARNING][4408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" HandleID="k8s-pod-network.8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.148 [INFO][4408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" HandleID="k8s-pod-network.8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.150 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:33.153157 env[1303]: 2025-09-13 00:54:33.151 [INFO][4385] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:33.153619 env[1303]: time="2025-09-13T00:54:33.153292210Z" level=info msg="TearDown network for sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\" successfully" Sep 13 00:54:33.153619 env[1303]: time="2025-09-13T00:54:33.153319461Z" level=info msg="StopPodSandbox for \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\" returns successfully" Sep 13 00:54:33.153966 env[1303]: time="2025-09-13T00:54:33.153939285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-whpk5,Uid:d2542adf-ca6b-4757-9a4c-0ba349d6ae47,Namespace:calico-system,Attempt:1,}" Sep 13 00:54:33.157308 systemd[1]: run-netns-cni\x2d820bf341\x2dc26c\x2d4142\x2d9d9a\x2d24ff4e94f23a.mount: Deactivated successfully. 
Sep 13 00:54:33.164285 kubelet[2119]: E0913 00:54:33.164260 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:33.197757 systemd-networkd[1078]: cali5297e5d61b0: Gained IPv6LL Sep 13 00:54:33.389709 systemd-networkd[1078]: cali6bb8f6089fb: Gained IPv6LL Sep 13 00:54:33.883615 kubelet[2119]: I0913 00:54:33.883530 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dqtkm" podStartSLOduration=36.883510765 podStartE2EDuration="36.883510765s" podCreationTimestamp="2025-09-13 00:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:33.352312641 +0000 UTC m=+41.398852281" watchObservedRunningTime="2025-09-13 00:54:33.883510765 +0000 UTC m=+41.930050405" Sep 13 00:54:33.902054 systemd-networkd[1078]: calif09d4481a4c: Gained IPv6LL Sep 13 00:54:33.958000 audit[4454]: NETFILTER_CFG table=filter:108 family=2 entries=20 op=nft_register_rule pid=4454 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:33.958000 audit[4454]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf8b0d680 a2=0 a3=7ffdf8b0d66c items=0 ppid=2266 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:33.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:33.966000 audit[4454]: NETFILTER_CFG table=nat:109 family=2 entries=14 op=nft_register_rule pid=4454 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:33.966000 audit[4454]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 
a1=7ffdf8b0d680 a2=0 a3=0 items=0 ppid=2266 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:33.966000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:33.979000 audit[4464]: NETFILTER_CFG table=filter:110 family=2 entries=17 op=nft_register_rule pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:33.979000 audit[4464]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd612e0130 a2=0 a3=7ffd612e011c items=0 ppid=2266 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:33.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:33.985000 audit[4464]: NETFILTER_CFG table=nat:111 family=2 entries=35 op=nft_register_chain pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:33.985000 audit[4464]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd612e0130 a2=0 a3=7ffd612e011c items=0 ppid=2266 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:33.985000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:34.014060 systemd-networkd[1078]: cali0591308f647: Link UP Sep 13 00:54:34.015903 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:54:34.016017 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): cali0591308f647: link becomes ready Sep 13 00:54:34.016112 systemd-networkd[1078]: cali0591308f647: Gained carrier Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.882 [INFO][4431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--whpk5-eth0 goldmane-7988f88666- calico-system d2542adf-ca6b-4757-9a4c-0ba349d6ae47 1035 0 2025-09-13 00:54:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-whpk5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0591308f647 [] [] }} ContainerID="c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" Namespace="calico-system" Pod="goldmane-7988f88666-whpk5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--whpk5-" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.882 [INFO][4431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" Namespace="calico-system" Pod="goldmane-7988f88666-whpk5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.967 [INFO][4450] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" HandleID="k8s-pod-network.c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.968 [INFO][4450] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" 
HandleID="k8s-pod-network.c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-whpk5", "timestamp":"2025-09-13 00:54:33.967809907 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.968 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.968 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.968 [INFO][4450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.976 [INFO][4450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" host="localhost" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.980 [INFO][4450] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.986 [INFO][4450] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.988 [INFO][4450] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.990 [INFO][4450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.990 [INFO][4450] ipam/ipam.go 1220: Attempting to assign 1 addresses 
from block block=192.168.88.128/26 handle="k8s-pod-network.c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" host="localhost" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.991 [INFO][4450] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:33.996 [INFO][4450] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" host="localhost" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:34.008 [INFO][4450] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" host="localhost" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:34.008 [INFO][4450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" host="localhost" Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:34.008 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:54:34.058207 env[1303]: 2025-09-13 00:54:34.009 [INFO][4450] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" HandleID="k8s-pod-network.c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:34.058895 env[1303]: 2025-09-13 00:54:34.011 [INFO][4431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" Namespace="calico-system" Pod="goldmane-7988f88666-whpk5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--whpk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--whpk5-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d2542adf-ca6b-4757-9a4c-0ba349d6ae47", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-whpk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0591308f647", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:34.058895 env[1303]: 2025-09-13 00:54:34.011 [INFO][4431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" Namespace="calico-system" Pod="goldmane-7988f88666-whpk5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:34.058895 env[1303]: 2025-09-13 00:54:34.011 [INFO][4431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0591308f647 ContainerID="c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" Namespace="calico-system" Pod="goldmane-7988f88666-whpk5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:34.058895 env[1303]: 2025-09-13 00:54:34.016 [INFO][4431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" Namespace="calico-system" Pod="goldmane-7988f88666-whpk5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:34.058895 env[1303]: 2025-09-13 00:54:34.016 [INFO][4431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" Namespace="calico-system" Pod="goldmane-7988f88666-whpk5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--whpk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--whpk5-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d2542adf-ca6b-4757-9a4c-0ba349d6ae47", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 8, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d", Pod:"goldmane-7988f88666-whpk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0591308f647", MAC:"76:c0:fe:29:9b:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:34.058895 env[1303]: 2025-09-13 00:54:34.053 [INFO][4431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d" Namespace="calico-system" Pod="goldmane-7988f88666-whpk5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:34.070018 env[1303]: time="2025-09-13T00:54:34.069907768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:34.070018 env[1303]: time="2025-09-13T00:54:34.069952352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:34.070018 env[1303]: time="2025-09-13T00:54:34.069962100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:34.071000 audit[4490]: NETFILTER_CFG table=filter:112 family=2 entries=66 op=nft_register_chain pid=4490 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:34.071000 audit[4490]: SYSCALL arch=c000003e syscall=46 success=yes exit=32768 a0=3 a1=7fff5bab2b80 a2=0 a3=7fff5bab2b6c items=0 ppid=3530 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:34.071000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:34.074467 env[1303]: time="2025-09-13T00:54:34.070118063Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d pid=4482 runtime=io.containerd.runc.v2 Sep 13 00:54:34.081541 env[1303]: time="2025-09-13T00:54:34.081040680Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:34.090686 env[1303]: time="2025-09-13T00:54:34.083850980Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:34.090686 env[1303]: time="2025-09-13T00:54:34.088217531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:34.101377 systemd-resolved[1220]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address 
Sep 13 00:54:34.113490 systemd-networkd[1078]: calid3697549c3a: Link UP Sep 13 00:54:34.115841 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid3697549c3a: link becomes ready Sep 13 00:54:34.115683 systemd-networkd[1078]: calid3697549c3a: Gained carrier Sep 13 00:54:34.134471 env[1303]: time="2025-09-13T00:54:34.134369232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-whpk5,Uid:d2542adf-ca6b-4757-9a4c-0ba349d6ae47,Namespace:calico-system,Attempt:1,} returns sandbox id \"c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d\"" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:33.867 [INFO][4416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0 coredns-7c65d6cfc9- kube-system 29726fd2-0f28-42d0-a860-baf11550e993 1034 0 2025-09-13 00:53:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-xqk2c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid3697549c3a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xqk2c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xqk2c-" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:33.868 [INFO][4416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xqk2c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:33.977 [INFO][4448] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" HandleID="k8s-pod-network.bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:33.978 [INFO][4448] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" HandleID="k8s-pod-network.bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000131450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-xqk2c", "timestamp":"2025-09-13 00:54:33.977768293 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:33.978 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.009 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.009 [INFO][4448] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.079 [INFO][4448] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" host="localhost" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.084 [INFO][4448] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.088 [INFO][4448] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.092 [INFO][4448] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.094 [INFO][4448] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.094 [INFO][4448] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" host="localhost" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.095 [INFO][4448] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.098 [INFO][4448] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" host="localhost" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.107 [INFO][4448] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" host="localhost" Sep 13 
00:54:34.170192 env[1303]: 2025-09-13 00:54:34.107 [INFO][4448] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" host="localhost" Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.107 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:34.170192 env[1303]: 2025-09-13 00:54:34.107 [INFO][4448] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" HandleID="k8s-pod-network.bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:34.170907 env[1303]: 2025-09-13 00:54:34.109 [INFO][4416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xqk2c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"29726fd2-0f28-42d0-a860-baf11550e993", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-xqk2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3697549c3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:34.170907 env[1303]: 2025-09-13 00:54:34.109 [INFO][4416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xqk2c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:34.170907 env[1303]: 2025-09-13 00:54:34.109 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3697549c3a ContainerID="bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xqk2c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:34.170907 env[1303]: 2025-09-13 00:54:34.115 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xqk2c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:34.170907 env[1303]: 2025-09-13 00:54:34.121 [INFO][4416] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xqk2c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"29726fd2-0f28-42d0-a860-baf11550e993", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca", Pod:"coredns-7c65d6cfc9-xqk2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3697549c3a", MAC:"1e:da:45:37:eb:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:34.170907 env[1303]: 2025-09-13 00:54:34.161 [INFO][4416] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xqk2c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:34.175467 kubelet[2119]: E0913 00:54:34.175419 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:34.174000 audit[4529]: NETFILTER_CFG table=filter:113 family=2 entries=54 op=nft_register_chain pid=4529 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:34.174000 audit[4529]: SYSCALL arch=c000003e syscall=46 success=yes exit=25540 a0=3 a1=7ffd0bb362e0 a2=0 a3=7ffd0bb362cc items=0 ppid=3530 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:34.174000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:34.348895 env[1303]: time="2025-09-13T00:54:34.348838888Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:34.349332 env[1303]: time="2025-09-13T00:54:34.349291909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:54:34.350717 env[1303]: 
time="2025-09-13T00:54:34.350665439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:54:34.351779 env[1303]: time="2025-09-13T00:54:34.351738153Z" level=info msg="CreateContainer within sandbox \"7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:54:34.411690 env[1303]: time="2025-09-13T00:54:34.411584538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:34.411690 env[1303]: time="2025-09-13T00:54:34.411645813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:34.411690 env[1303]: time="2025-09-13T00:54:34.411656975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:34.411917 env[1303]: time="2025-09-13T00:54:34.411810422Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca pid=4538 runtime=io.containerd.runc.v2 Sep 13 00:54:34.420660 env[1303]: time="2025-09-13T00:54:34.420624108Z" level=info msg="CreateContainer within sandbox \"7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"91f136543ee2c0da5dad22ed98bcd013bd83e98782dbe0d68564c1e7af97f188\"" Sep 13 00:54:34.421458 env[1303]: time="2025-09-13T00:54:34.421421306Z" level=info msg="StartContainer for \"91f136543ee2c0da5dad22ed98bcd013bd83e98782dbe0d68564c1e7af97f188\"" Sep 13 00:54:34.442372 systemd-resolved[1220]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:54:34.465129 env[1303]: time="2025-09-13T00:54:34.465081727Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xqk2c,Uid:29726fd2-0f28-42d0-a860-baf11550e993,Namespace:kube-system,Attempt:1,} returns sandbox id \"bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca\"" Sep 13 00:54:34.465933 kubelet[2119]: E0913 00:54:34.465909 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:34.467503 env[1303]: time="2025-09-13T00:54:34.467462539Z" level=info msg="CreateContainer within sandbox \"bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:54:34.485510 env[1303]: time="2025-09-13T00:54:34.485456328Z" level=info msg="CreateContainer within sandbox \"bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7050226988040b1e38911dfe6479222ecc10b439fec60bcdf64be5b6599675ea\"" Sep 13 00:54:34.487411 env[1303]: time="2025-09-13T00:54:34.487387425Z" level=info msg="StartContainer for \"7050226988040b1e38911dfe6479222ecc10b439fec60bcdf64be5b6599675ea\"" Sep 13 00:54:34.501244 env[1303]: time="2025-09-13T00:54:34.501210881Z" level=info msg="StartContainer for \"91f136543ee2c0da5dad22ed98bcd013bd83e98782dbe0d68564c1e7af97f188\" returns successfully" Sep 13 00:54:34.528687 env[1303]: time="2025-09-13T00:54:34.527957000Z" level=info msg="StartContainer for \"7050226988040b1e38911dfe6479222ecc10b439fec60bcdf64be5b6599675ea\" returns successfully" Sep 13 00:54:35.181255 kubelet[2119]: E0913 00:54:35.181209 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:35.181694 kubelet[2119]: E0913 00:54:35.181648 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:35.199947 kubelet[2119]: I0913 00:54:35.199671 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-966dc6bcb-g8gcj" podStartSLOduration=25.198658333 podStartE2EDuration="29.199648269s" podCreationTimestamp="2025-09-13 00:54:06 +0000 UTC" firstStartedPulling="2025-09-13 00:54:30.34944525 +0000 UTC m=+38.395984890" lastFinishedPulling="2025-09-13 00:54:34.350435156 +0000 UTC m=+42.396974826" observedRunningTime="2025-09-13 00:54:35.189414016 +0000 UTC m=+43.235953646" watchObservedRunningTime="2025-09-13 00:54:35.199648269 +0000 UTC m=+43.246187909" Sep 13 00:54:35.212607 kernel: kauditd_printk_skb: 173 callbacks suppressed Sep 13 00:54:35.212704 kernel: audit: type=1325 audit(1757724875.204:431): table=filter:114 family=2 entries=14 op=nft_register_rule pid=4652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:35.212736 kernel: audit: type=1300 audit(1757724875.204:431): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca0ca5ae0 a2=0 a3=7ffca0ca5acc items=0 ppid=2266 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:35.204000 audit[4652]: NETFILTER_CFG table=filter:114 family=2 entries=14 op=nft_register_rule pid=4652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:35.204000 audit[4652]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca0ca5ae0 a2=0 a3=7ffca0ca5acc items=0 ppid=2266 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:35.204000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:35.218248 kernel: audit: type=1327 audit(1757724875.204:431): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:35.222000 audit[4652]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=4652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:35.222000 audit[4652]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffca0ca5ae0 a2=0 a3=7ffca0ca5acc items=0 ppid=2266 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:35.232030 kernel: audit: type=1325 audit(1757724875.222:432): table=nat:115 family=2 entries=20 op=nft_register_rule pid=4652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:35.232135 kernel: audit: type=1300 audit(1757724875.222:432): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffca0ca5ae0 a2=0 a3=7ffca0ca5acc items=0 ppid=2266 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:35.232177 kernel: audit: type=1327 audit(1757724875.222:432): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:35.222000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:35.309785 systemd-networkd[1078]: cali0591308f647: Gained IPv6LL Sep 13 00:54:35.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@9-10.0.0.131:22-10.0.0.1:50190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:35.452681 systemd[1]: Started sshd@9-10.0.0.131:22-10.0.0.1:50190.service. Sep 13 00:54:35.457592 kernel: audit: type=1130 audit(1757724875.451:433): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.131:22-10.0.0.1:50190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:35.511000 audit[4659]: USER_ACCT pid=4659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:35.513403 sshd[4659]: Accepted publickey for core from 10.0.0.1 port 50190 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:35.515000 audit[4659]: CRED_ACQ pid=4659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:35.517650 sshd[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:35.520865 kernel: audit: type=1101 audit(1757724875.511:434): pid=4659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:35.521008 kernel: audit: type=1103 audit(1757724875.515:435): pid=4659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:35.521054 kernel: 
audit: type=1006 audit(1757724875.515:436): pid=4659 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 13 00:54:35.522202 systemd-logind[1289]: New session 10 of user core. Sep 13 00:54:35.523014 systemd[1]: Started session-10.scope. Sep 13 00:54:35.515000 audit[4659]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1152a260 a2=3 a3=0 items=0 ppid=1 pid=4659 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:35.515000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:35.526000 audit[4659]: USER_START pid=4659 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:35.528000 audit[4662]: CRED_ACQ pid=4662 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:35.694702 kubelet[2119]: I0913 00:54:35.694635 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xqk2c" podStartSLOduration=38.69461402 podStartE2EDuration="38.69461402s" podCreationTimestamp="2025-09-13 00:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:35.201072384 +0000 UTC m=+43.247612024" watchObservedRunningTime="2025-09-13 00:54:35.69461402 +0000 UTC m=+43.741153660" Sep 13 00:54:35.830411 sshd[4659]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:35.830000 audit[4659]: USER_END pid=4659 uid=0 auid=500 
ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:35.830000 audit[4659]: CRED_DISP pid=4659 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:35.832853 systemd[1]: sshd@9-10.0.0.131:22-10.0.0.1:50190.service: Deactivated successfully. Sep 13 00:54:35.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.131:22-10.0.0.1:50190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:35.833955 systemd-logind[1289]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:54:35.834018 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:54:35.834941 systemd-logind[1289]: Removed session 10. 
Sep 13 00:54:35.885770 systemd-networkd[1078]: calid3697549c3a: Gained IPv6LL Sep 13 00:54:36.084022 env[1303]: time="2025-09-13T00:54:36.083854729Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:36.085746 env[1303]: time="2025-09-13T00:54:36.085696559Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:36.087100 env[1303]: time="2025-09-13T00:54:36.087071411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:36.088309 env[1303]: time="2025-09-13T00:54:36.088275683Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:36.088886 env[1303]: time="2025-09-13T00:54:36.088851384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:54:36.089956 env[1303]: time="2025-09-13T00:54:36.089929971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:54:36.090883 env[1303]: time="2025-09-13T00:54:36.090856601Z" level=info msg="CreateContainer within sandbox \"bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:54:36.105350 env[1303]: time="2025-09-13T00:54:36.105302173Z" level=info msg="CreateContainer within sandbox \"bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ddd2dff1327514a71d0ea6cc3e53a2ba86ba27a07ed0e4e0b31023327d0aa542\"" Sep 13 00:54:36.105817 env[1303]: time="2025-09-13T00:54:36.105776424Z" level=info msg="StartContainer for \"ddd2dff1327514a71d0ea6cc3e53a2ba86ba27a07ed0e4e0b31023327d0aa542\"" Sep 13 00:54:36.152627 env[1303]: time="2025-09-13T00:54:36.152577087Z" level=info msg="StartContainer for \"ddd2dff1327514a71d0ea6cc3e53a2ba86ba27a07ed0e4e0b31023327d0aa542\" returns successfully" Sep 13 00:54:36.185108 kubelet[2119]: E0913 00:54:36.185074 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:36.237000 audit[4708]: NETFILTER_CFG table=filter:116 family=2 entries=13 op=nft_register_rule pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:36.237000 audit[4708]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc8cfc7c70 a2=0 a3=7ffc8cfc7c5c items=0 ppid=2266 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:36.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:36.249000 audit[4708]: NETFILTER_CFG table=nat:117 family=2 entries=63 op=nft_register_chain pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:36.249000 audit[4708]: SYSCALL arch=c000003e syscall=46 success=yes exit=23436 a0=3 a1=7ffc8cfc7c70 a2=0 a3=7ffc8cfc7c5c items=0 ppid=2266 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:36.249000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:36.421356 kubelet[2119]: I0913 00:54:36.421299 2119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:37.102748 systemd[1]: run-containerd-runc-k8s.io-da2128f64d786db12eb0b474b202971ce4ad11f6ec469ed4cfcd360a56affb4b-runc.MSFYmI.mount: Deactivated successfully. Sep 13 00:54:37.187334 kubelet[2119]: E0913 00:54:37.187306 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:38.454543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3403019964.mount: Deactivated successfully. Sep 13 00:54:38.474391 env[1303]: time="2025-09-13T00:54:38.474318650Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:38.478217 env[1303]: time="2025-09-13T00:54:38.478176676Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:38.480379 env[1303]: time="2025-09-13T00:54:38.480332375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:38.482221 env[1303]: time="2025-09-13T00:54:38.482198670Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:38.482710 env[1303]: time="2025-09-13T00:54:38.482686106Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:54:38.484103 env[1303]: time="2025-09-13T00:54:38.483920634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:54:38.485197 env[1303]: time="2025-09-13T00:54:38.485161555Z" level=info msg="CreateContainer within sandbox \"ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:54:38.498745 env[1303]: time="2025-09-13T00:54:38.498629590Z" level=info msg="CreateContainer within sandbox \"ee6170b2586b2013d4cec08e61ba5a98050474cadb377e9a43587aada191ddd1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"bb110a3bfd4fa99ffdd478f1f8a68c6dbfb0ae692db89befed6774454f31a797\"" Sep 13 00:54:38.499339 env[1303]: time="2025-09-13T00:54:38.499315368Z" level=info msg="StartContainer for \"bb110a3bfd4fa99ffdd478f1f8a68c6dbfb0ae692db89befed6774454f31a797\"" Sep 13 00:54:38.560827 env[1303]: time="2025-09-13T00:54:38.560763184Z" level=info msg="StartContainer for \"bb110a3bfd4fa99ffdd478f1f8a68c6dbfb0ae692db89befed6774454f31a797\" returns successfully" Sep 13 00:54:39.200529 kubelet[2119]: I0913 00:54:39.200275 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-874dd5657-lw978" podStartSLOduration=1.491561501 podStartE2EDuration="10.200259899s" podCreationTimestamp="2025-09-13 00:54:29 +0000 UTC" firstStartedPulling="2025-09-13 00:54:29.775038201 +0000 UTC m=+37.821577841" lastFinishedPulling="2025-09-13 00:54:38.483736599 +0000 UTC m=+46.530276239" observedRunningTime="2025-09-13 00:54:39.199651246 +0000 UTC m=+47.246190886" watchObservedRunningTime="2025-09-13 00:54:39.200259899 +0000 UTC m=+47.246799529" Sep 13 00:54:39.212000 audit[4794]: NETFILTER_CFG table=filter:118 family=2 entries=11 
op=nft_register_rule pid=4794 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:39.212000 audit[4794]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff7016a840 a2=0 a3=7fff7016a82c items=0 ppid=2266 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:39.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:39.217000 audit[4794]: NETFILTER_CFG table=nat:119 family=2 entries=29 op=nft_register_chain pid=4794 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:39.217000 audit[4794]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fff7016a840 a2=0 a3=7fff7016a82c items=0 ppid=2266 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:39.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:40.833914 systemd[1]: Started sshd@10-10.0.0.131:22-10.0.0.1:42740.service. Sep 13 00:54:40.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.131:22-10.0.0.1:42740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:40.835332 kernel: kauditd_printk_skb: 19 callbacks suppressed Sep 13 00:54:40.835396 kernel: audit: type=1130 audit(1757724880.832:446): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.131:22-10.0.0.1:42740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:40.911000 audit[4801]: USER_ACCT pid=4801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:40.914179 sshd[4801]: Accepted publickey for core from 10.0.0.1 port 42740 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:40.916228 sshd[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:40.911000 audit[4801]: CRED_ACQ pid=4801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:40.921662 systemd-logind[1289]: New session 11 of user core. Sep 13 00:54:40.922537 kernel: audit: type=1101 audit(1757724880.911:447): pid=4801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:40.922609 kernel: audit: type=1103 audit(1757724880.911:448): pid=4801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:40.922683 kernel: audit: type=1006 audit(1757724880.911:449): pid=4801 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Sep 13 00:54:40.922411 systemd[1]: Started session-11.scope. 
Sep 13 00:54:40.911000 audit[4801]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5ac8dfc0 a2=3 a3=0 items=0 ppid=1 pid=4801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:40.930080 kernel: audit: type=1300 audit(1757724880.911:449): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5ac8dfc0 a2=3 a3=0 items=0 ppid=1 pid=4801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:40.930264 kernel: audit: type=1327 audit(1757724880.911:449): proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:40.911000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:40.924000 audit[4801]: USER_START pid=4801 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:40.936222 kernel: audit: type=1105 audit(1757724880.924:450): pid=4801 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:40.936313 kernel: audit: type=1103 audit(1757724880.929:451): pid=4804 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:40.929000 audit[4804]: CRED_ACQ pid=4804 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:41.262448 sshd[4801]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:41.262000 audit[4801]: USER_END pid=4801 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:41.285932 kernel: audit: type=1106 audit(1757724881.262:452): pid=4801 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:41.284852 systemd[1]: sshd@10-10.0.0.131:22-10.0.0.1:42740.service: Deactivated successfully. Sep 13 00:54:41.285645 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:54:41.286806 systemd-logind[1289]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:54:41.262000 audit[4801]: CRED_DISP pid=4801 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:41.287712 systemd-logind[1289]: Removed session 11. Sep 13 00:54:41.291623 kernel: audit: type=1104 audit(1757724881.262:453): pid=4801 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:41.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.131:22-10.0.0.1:42740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 00:54:42.903964 env[1303]: time="2025-09-13T00:54:42.903912806Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:42.906132 env[1303]: time="2025-09-13T00:54:42.906099652Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:42.908077 env[1303]: time="2025-09-13T00:54:42.908028054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:42.909742 env[1303]: time="2025-09-13T00:54:42.909693722Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:42.910081 env[1303]: time="2025-09-13T00:54:42.910022009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:54:42.911555 env[1303]: time="2025-09-13T00:54:42.911513820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:54:42.925685 env[1303]: time="2025-09-13T00:54:42.925629228Z" level=info msg="CreateContainer within sandbox \"19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:54:42.945017 env[1303]: time="2025-09-13T00:54:42.944963666Z" level=info msg="CreateContainer within sandbox \"19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4957b8bdb65068fed31cf8365ce9bfe04f3bd57f7a2bcce341e3c28309a39e7b\"" Sep 13 00:54:42.945606 env[1303]: time="2025-09-13T00:54:42.945536132Z" level=info msg="StartContainer for \"4957b8bdb65068fed31cf8365ce9bfe04f3bd57f7a2bcce341e3c28309a39e7b\"" Sep 13 00:54:43.006290 env[1303]: time="2025-09-13T00:54:43.006242153Z" level=info msg="StartContainer for \"4957b8bdb65068fed31cf8365ce9bfe04f3bd57f7a2bcce341e3c28309a39e7b\" returns successfully" Sep 13 00:54:43.209466 kubelet[2119]: I0913 00:54:43.209334 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-798cbcbbb6-8gg8t" podStartSLOduration=22.833649333 podStartE2EDuration="34.209319539s" podCreationTimestamp="2025-09-13 00:54:09 +0000 UTC" firstStartedPulling="2025-09-13 00:54:31.535318498 +0000 UTC m=+39.581858138" lastFinishedPulling="2025-09-13 00:54:42.910988704 +0000 UTC m=+50.957528344" observedRunningTime="2025-09-13 00:54:43.208195498 +0000 UTC m=+51.254735139" watchObservedRunningTime="2025-09-13 00:54:43.209319539 +0000 UTC m=+51.255859169" Sep 13 00:54:43.418445 env[1303]: time="2025-09-13T00:54:43.418399808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:43.420337 env[1303]: time="2025-09-13T00:54:43.420302591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:43.422120 env[1303]: time="2025-09-13T00:54:43.422081392Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:43.423871 env[1303]: 
time="2025-09-13T00:54:43.423819917Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:43.424228 env[1303]: time="2025-09-13T00:54:43.424199880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:54:43.425291 env[1303]: time="2025-09-13T00:54:43.425266834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:54:43.426427 env[1303]: time="2025-09-13T00:54:43.426395052Z" level=info msg="CreateContainer within sandbox \"50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:54:43.438553 env[1303]: time="2025-09-13T00:54:43.438513630Z" level=info msg="CreateContainer within sandbox \"50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f8ef0652df4f4df9dc3f6d72065d10b3f13482adcb45391561c233ffd5df1d22\"" Sep 13 00:54:43.438928 env[1303]: time="2025-09-13T00:54:43.438893112Z" level=info msg="StartContainer for \"f8ef0652df4f4df9dc3f6d72065d10b3f13482adcb45391561c233ffd5df1d22\"" Sep 13 00:54:43.491166 env[1303]: time="2025-09-13T00:54:43.491010924Z" level=info msg="StartContainer for \"f8ef0652df4f4df9dc3f6d72065d10b3f13482adcb45391561c233ffd5df1d22\" returns successfully" Sep 13 00:54:44.225000 audit[4926]: NETFILTER_CFG table=filter:120 family=2 entries=10 op=nft_register_rule pid=4926 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:44.225000 audit[4926]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffece51aa40 a2=0 a3=7ffece51aa2c items=0 ppid=2266 pid=4926 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:44.229000 audit[4926]: NETFILTER_CFG table=nat:121 family=2 entries=32 op=nft_register_rule pid=4926 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:44.229000 audit[4926]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffece51aa40 a2=0 a3=7ffece51aa2c items=0 ppid=2266 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:45.327037 kubelet[2119]: I0913 00:54:45.326996 2119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:46.272720 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:54:46.272863 kernel: audit: type=1130 audit(1757724886.264:457): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.131:22-10.0.0.1:42756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:46.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.131:22-10.0.0.1:42756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:46.266221 systemd[1]: Started sshd@11-10.0.0.131:22-10.0.0.1:42756.service. 
Sep 13 00:54:46.309000 audit[4927]: USER_ACCT pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.310945 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 42756 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:46.324576 kernel: audit: type=1101 audit(1757724886.309:458): pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.324638 kernel: audit: type=1103 audit(1757724886.313:459): pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.324661 kernel: audit: type=1006 audit(1757724886.313:460): pid=4927 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Sep 13 00:54:46.324681 kernel: audit: type=1300 audit(1757724886.313:460): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd51e16bb0 a2=3 a3=0 items=0 ppid=1 pid=4927 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:46.313000 audit[4927]: CRED_ACQ pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.313000 audit[4927]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd51e16bb0 a2=3 a3=0 
items=0 ppid=1 pid=4927 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:46.315226 sshd[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:46.313000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:46.326588 kernel: audit: type=1327 audit(1757724886.313:460): proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:46.327646 systemd-logind[1289]: New session 12 of user core. Sep 13 00:54:46.328623 systemd[1]: Started session-12.scope. Sep 13 00:54:46.331000 audit[4927]: USER_START pid=4927 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.332000 audit[4930]: CRED_ACQ pid=4930 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.340153 kernel: audit: type=1105 audit(1757724886.331:461): pid=4927 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.340192 kernel: audit: type=1103 audit(1757724886.332:462): pid=4930 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.544959 sshd[4927]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:46.544000 audit[4927]: 
USER_END pid=4927 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.544000 audit[4927]: CRED_DISP pid=4927 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.547974 systemd[1]: Started sshd@12-10.0.0.131:22-10.0.0.1:42762.service. Sep 13 00:54:46.548458 systemd[1]: sshd@11-10.0.0.131:22-10.0.0.1:42756.service: Deactivated successfully. Sep 13 00:54:46.549214 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:54:46.554015 systemd-logind[1289]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:54:46.555697 kernel: audit: type=1106 audit(1757724886.544:463): pid=4927 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.555757 kernel: audit: type=1104 audit(1757724886.544:464): pid=4927 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.131:22-10.0.0.1:42762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:46.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.131:22-10.0.0.1:42756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:46.556231 systemd-logind[1289]: Removed session 12. Sep 13 00:54:46.591000 audit[4942]: USER_ACCT pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.592815 sshd[4942]: Accepted publickey for core from 10.0.0.1 port 42762 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:46.592000 audit[4942]: CRED_ACQ pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.592000 audit[4942]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd044f52b0 a2=3 a3=0 items=0 ppid=1 pid=4942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:46.592000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:46.594075 sshd[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:46.597559 systemd-logind[1289]: New session 13 of user core. Sep 13 00:54:46.598641 systemd[1]: Started session-13.scope. 
Sep 13 00:54:46.602000 audit[4942]: USER_START pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.603000 audit[4946]: CRED_ACQ pid=4946 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.648917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143903623.mount: Deactivated successfully. Sep 13 00:54:46.780671 sshd[4942]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:46.782965 systemd[1]: Started sshd@13-10.0.0.131:22-10.0.0.1:42770.service. Sep 13 00:54:46.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.131:22-10.0.0.1:42770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:46.788000 audit[4942]: USER_END pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.789000 audit[4942]: CRED_DISP pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.792595 systemd[1]: sshd@12-10.0.0.131:22-10.0.0.1:42762.service: Deactivated successfully. 
Sep 13 00:54:46.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.131:22-10.0.0.1:42762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:46.793882 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:54:46.794600 systemd-logind[1289]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:54:46.795840 systemd-logind[1289]: Removed session 13. Sep 13 00:54:46.832760 sshd[4954]: Accepted publickey for core from 10.0.0.1 port 42770 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:46.831000 audit[4954]: USER_ACCT pid=4954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.832000 audit[4954]: CRED_ACQ pid=4954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.832000 audit[4954]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4d7fff90 a2=3 a3=0 items=0 ppid=1 pid=4954 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:46.832000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:46.834126 sshd[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:46.841971 systemd-logind[1289]: New session 14 of user core. Sep 13 00:54:46.842596 systemd[1]: Started session-14.scope. 
Sep 13 00:54:46.851000 audit[4954]: USER_START pid=4954 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.853000 audit[4959]: CRED_ACQ pid=4959 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.974120 sshd[4954]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:46.973000 audit[4954]: USER_END pid=4954 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.974000 audit[4954]: CRED_DISP pid=4954 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:46.977809 systemd[1]: sshd@13-10.0.0.131:22-10.0.0.1:42770.service: Deactivated successfully. Sep 13 00:54:46.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.131:22-10.0.0.1:42770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:46.979061 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:54:46.979524 systemd-logind[1289]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:54:46.980327 systemd-logind[1289]: Removed session 14. 
Sep 13 00:54:47.674507 env[1303]: time="2025-09-13T00:54:47.674435628Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.676249 env[1303]: time="2025-09-13T00:54:47.676196925Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.677968 env[1303]: time="2025-09-13T00:54:47.677927895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.679362 env[1303]: time="2025-09-13T00:54:47.679316082Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.679903 env[1303]: time="2025-09-13T00:54:47.679866486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:54:47.681014 env[1303]: time="2025-09-13T00:54:47.680978494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:54:47.686712 env[1303]: time="2025-09-13T00:54:47.686668357Z" level=info msg="CreateContainer within sandbox \"c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:54:47.701316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1371390281.mount: Deactivated successfully. 
Sep 13 00:54:47.704676 env[1303]: time="2025-09-13T00:54:47.704636217Z" level=info msg="CreateContainer within sandbox \"c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"203fd7759f84f2fec55b5a1bf6297c1ac1a67a3003576b0ecb440e3eb8bed784\"" Sep 13 00:54:47.706414 env[1303]: time="2025-09-13T00:54:47.705141405Z" level=info msg="StartContainer for \"203fd7759f84f2fec55b5a1bf6297c1ac1a67a3003576b0ecb440e3eb8bed784\"" Sep 13 00:54:47.887416 env[1303]: time="2025-09-13T00:54:47.887328255Z" level=info msg="StartContainer for \"203fd7759f84f2fec55b5a1bf6297c1ac1a67a3003576b0ecb440e3eb8bed784\" returns successfully" Sep 13 00:54:48.689131 kubelet[2119]: I0913 00:54:48.686139 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-966dc6bcb-r4qg4" podStartSLOduration=31.610316485 podStartE2EDuration="42.686119749s" podCreationTimestamp="2025-09-13 00:54:06 +0000 UTC" firstStartedPulling="2025-09-13 00:54:32.349322595 +0000 UTC m=+40.395862225" lastFinishedPulling="2025-09-13 00:54:43.425125849 +0000 UTC m=+51.471665489" observedRunningTime="2025-09-13 00:54:44.215310698 +0000 UTC m=+52.261850338" watchObservedRunningTime="2025-09-13 00:54:48.686119749 +0000 UTC m=+56.732659389" Sep 13 00:54:48.689131 kubelet[2119]: I0913 00:54:48.688902 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-whpk5" podStartSLOduration=27.143554723 podStartE2EDuration="40.688894069s" podCreationTimestamp="2025-09-13 00:54:08 +0000 UTC" firstStartedPulling="2025-09-13 00:54:34.135421368 +0000 UTC m=+42.181961008" lastFinishedPulling="2025-09-13 00:54:47.680760724 +0000 UTC m=+55.727300354" observedRunningTime="2025-09-13 00:54:48.685925745 +0000 UTC m=+56.732465385" watchObservedRunningTime="2025-09-13 00:54:48.688894069 +0000 UTC m=+56.735433709" Sep 13 00:54:48.716000 audit[5031]: NETFILTER_CFG 
table=filter:122 family=2 entries=10 op=nft_register_rule pid=5031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:48.716000 audit[5031]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffdde1839b0 a2=0 a3=7ffdde18399c items=0 ppid=2266 pid=5031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:48.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:48.721000 audit[5031]: NETFILTER_CFG table=nat:123 family=2 entries=24 op=nft_register_rule pid=5031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:48.721000 audit[5031]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffdde1839b0 a2=0 a3=7ffdde18399c items=0 ppid=2266 pid=5031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:48.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:49.342244 env[1303]: time="2025-09-13T00:54:49.342176024Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.343770 env[1303]: time="2025-09-13T00:54:49.343735572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.345224 env[1303]: time="2025-09-13T00:54:49.345188150Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.346587 env[1303]: time="2025-09-13T00:54:49.346510553Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.346944 env[1303]: time="2025-09-13T00:54:49.346908571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:54:49.349064 env[1303]: time="2025-09-13T00:54:49.349027059Z" level=info msg="CreateContainer within sandbox \"bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:54:49.362005 env[1303]: time="2025-09-13T00:54:49.361963228Z" level=info msg="CreateContainer within sandbox \"bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f7e82cdc9901149fd5ff2e9459c31968b268fee01c2ad88dfe97ef67fb83ac05\"" Sep 13 00:54:49.362448 env[1303]: time="2025-09-13T00:54:49.362404077Z" level=info msg="StartContainer for \"f7e82cdc9901149fd5ff2e9459c31968b268fee01c2ad88dfe97ef67fb83ac05\"" Sep 13 00:54:49.403610 env[1303]: time="2025-09-13T00:54:49.403536500Z" level=info msg="StartContainer for \"f7e82cdc9901149fd5ff2e9459c31968b268fee01c2ad88dfe97ef67fb83ac05\" returns successfully" Sep 13 00:54:49.677774 kubelet[2119]: I0913 00:54:49.677699 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wzvl6" podStartSLOduration=22.760466255 podStartE2EDuration="40.677681858s" podCreationTimestamp="2025-09-13 
00:54:09 +0000 UTC" firstStartedPulling="2025-09-13 00:54:31.430315677 +0000 UTC m=+39.476855307" lastFinishedPulling="2025-09-13 00:54:49.34753127 +0000 UTC m=+57.394070910" observedRunningTime="2025-09-13 00:54:49.677466604 +0000 UTC m=+57.724006244" watchObservedRunningTime="2025-09-13 00:54:49.677681858 +0000 UTC m=+57.724221498" Sep 13 00:54:50.135015 kubelet[2119]: I0913 00:54:50.134891 2119 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:54:50.135015 kubelet[2119]: I0913 00:54:50.134942 2119 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:54:50.845074 kubelet[2119]: I0913 00:54:50.845021 2119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:50.871000 audit[5097]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=5097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:50.871000 audit[5097]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffdecbf57d0 a2=0 a3=7ffdecbf57bc items=0 ppid=2266 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:50.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:50.876000 audit[5097]: NETFILTER_CFG table=nat:125 family=2 entries=36 op=nft_register_chain pid=5097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:50.876000 audit[5097]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffdecbf57d0 a2=0 a3=7ffdecbf57bc items=0 ppid=2266 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:50.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:51.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.131:22-10.0.0.1:54974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:51.977664 systemd[1]: Started sshd@14-10.0.0.131:22-10.0.0.1:54974.service. Sep 13 00:54:51.979095 kernel: kauditd_printk_skb: 35 callbacks suppressed Sep 13 00:54:51.979224 kernel: audit: type=1130 audit(1757724891.976:488): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.131:22-10.0.0.1:54974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:52.020000 audit[5098]: USER_ACCT pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.024000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.026265 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 54974 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:52.026522 sshd[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:52.026769 kernel: audit: type=1101 audit(1757724892.020:489): pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.026865 kernel: audit: type=1103 audit(1757724892.024:490): pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.032548 kernel: audit: type=1006 audit(1757724892.024:491): pid=5098 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Sep 13 00:54:52.032657 env[1303]: time="2025-09-13T00:54:52.030496882Z" level=info msg="StopPodSandbox for \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\"" Sep 13 00:54:52.024000 audit[5098]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe68879d80 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:52.080142 kernel: audit: type=1300 audit(1757724892.024:491): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe68879d80 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:52.080216 kernel: audit: type=1327 audit(1757724892.024:491): proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:52.080236 kernel: audit: type=1105 audit(1757724892.037:492): pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.080251 kernel: audit: type=1103 audit(1757724892.039:493): pid=5106 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.024000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:52.037000 audit[5098]: USER_START pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.039000 audit[5106]: CRED_ACQ pid=5106 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.034637 systemd[1]: Started session-15.scope. 
Sep 13 00:54:52.035184 systemd-logind[1289]: New session 15 of user core. Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.109 [WARNING][5114] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0", GenerateName:"calico-apiserver-966dc6bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966dc6bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b", Pod:"calico-apiserver-966dc6bcb-g8gcj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03cc84b9fd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.110 [INFO][5114] cni-plugin/k8s.go 640: Cleaning up 
netns ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.110 [INFO][5114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" iface="eth0" netns="" Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.110 [INFO][5114] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.110 [INFO][5114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.144 [INFO][5128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" HandleID="k8s-pod-network.6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.144 [INFO][5128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.145 [INFO][5128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.150 [WARNING][5128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" HandleID="k8s-pod-network.6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.150 [INFO][5128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" HandleID="k8s-pod-network.6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.152 [INFO][5128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:52.156534 env[1303]: 2025-09-13 00:54:52.154 [INFO][5114] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:52.157152 env[1303]: time="2025-09-13T00:54:52.156570790Z" level=info msg="TearDown network for sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\" successfully" Sep 13 00:54:52.157152 env[1303]: time="2025-09-13T00:54:52.156603982Z" level=info msg="StopPodSandbox for \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\" returns successfully" Sep 13 00:54:52.163843 env[1303]: time="2025-09-13T00:54:52.157266907Z" level=info msg="RemovePodSandbox for \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\"" Sep 13 00:54:52.163843 env[1303]: time="2025-09-13T00:54:52.157296512Z" level=info msg="Forcibly stopping sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\"" Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.192 [WARNING][5145] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0", GenerateName:"calico-apiserver-966dc6bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e1fe4a8-3bfe-4866-b68f-127f3e0fe41c", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966dc6bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7cbe45c50f466f1433d9f803ba4fe4e4ee29d4db003c875f1eb15ec07157b76b", Pod:"calico-apiserver-966dc6bcb-g8gcj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03cc84b9fd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.192 [INFO][5145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.192 [INFO][5145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" iface="eth0" netns="" Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.192 [INFO][5145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.192 [INFO][5145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.212 [INFO][5155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" HandleID="k8s-pod-network.6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.212 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.212 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.218 [WARNING][5155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" HandleID="k8s-pod-network.6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.218 [INFO][5155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" HandleID="k8s-pod-network.6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Workload="localhost-k8s-calico--apiserver--966dc6bcb--g8gcj-eth0" Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.219 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:52.223776 env[1303]: 2025-09-13 00:54:52.221 [INFO][5145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1" Sep 13 00:54:52.224427 env[1303]: time="2025-09-13T00:54:52.223810850Z" level=info msg="TearDown network for sandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\" successfully" Sep 13 00:54:52.464408 env[1303]: time="2025-09-13T00:54:52.464342223Z" level=info msg="RemovePodSandbox \"6555135c9fac1bfa87154f9a823849cace9df80016cb12186b07c34cab50bba1\" returns successfully" Sep 13 00:54:52.465052 env[1303]: time="2025-09-13T00:54:52.464962659Z" level=info msg="StopPodSandbox for \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\"" Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.500 [WARNING][5172] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" WorkloadEndpoint="localhost-k8s-whisker--55cf57d69d--jj7b6-eth0" Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.500 [INFO][5172] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.500 [INFO][5172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" iface="eth0" netns="" Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.500 [INFO][5172] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.500 [INFO][5172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.522 [INFO][5181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" HandleID="k8s-pod-network.4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Workload="localhost-k8s-whisker--55cf57d69d--jj7b6-eth0" Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.523 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.523 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.529 [WARNING][5181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" HandleID="k8s-pod-network.4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Workload="localhost-k8s-whisker--55cf57d69d--jj7b6-eth0" Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.529 [INFO][5181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" HandleID="k8s-pod-network.4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Workload="localhost-k8s-whisker--55cf57d69d--jj7b6-eth0" Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.531 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:52.535129 env[1303]: 2025-09-13 00:54:52.533 [INFO][5172] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:52.535689 env[1303]: time="2025-09-13T00:54:52.535158480Z" level=info msg="TearDown network for sandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\" successfully" Sep 13 00:54:52.535689 env[1303]: time="2025-09-13T00:54:52.535194187Z" level=info msg="StopPodSandbox for \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\" returns successfully" Sep 13 00:54:52.535890 env[1303]: time="2025-09-13T00:54:52.535824069Z" level=info msg="RemovePodSandbox for \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\"" Sep 13 00:54:52.535934 env[1303]: time="2025-09-13T00:54:52.535870697Z" level=info msg="Forcibly stopping sandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\"" Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.569 [WARNING][5199] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" 
WorkloadEndpoint="localhost-k8s-whisker--55cf57d69d--jj7b6-eth0" Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.569 [INFO][5199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.569 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" iface="eth0" netns="" Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.569 [INFO][5199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.569 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.593 [INFO][5208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" HandleID="k8s-pod-network.4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Workload="localhost-k8s-whisker--55cf57d69d--jj7b6-eth0" Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.593 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.593 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.617 [WARNING][5208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" HandleID="k8s-pod-network.4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Workload="localhost-k8s-whisker--55cf57d69d--jj7b6-eth0" Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.617 [INFO][5208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" HandleID="k8s-pod-network.4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Workload="localhost-k8s-whisker--55cf57d69d--jj7b6-eth0" Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.618 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:52.622361 env[1303]: 2025-09-13 00:54:52.620 [INFO][5199] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5" Sep 13 00:54:52.622855 env[1303]: time="2025-09-13T00:54:52.622406998Z" level=info msg="TearDown network for sandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\" successfully" Sep 13 00:54:52.799389 env[1303]: time="2025-09-13T00:54:52.798710516Z" level=info msg="RemovePodSandbox \"4ecc32a80e45745cb49b493dc86f325c1081859280308578490de8129a84acb5\" returns successfully" Sep 13 00:54:52.799389 env[1303]: time="2025-09-13T00:54:52.799278854Z" level=info msg="StopPodSandbox for \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\"" Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.832 [WARNING][5227] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b5d081d4-7d87-4234-8171-fc6646bb9f9b", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731", Pod:"coredns-7c65d6cfc9-dqtkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif09d4481a4c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.833 [INFO][5227] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.833 [INFO][5227] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" iface="eth0" netns="" Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.833 [INFO][5227] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.833 [INFO][5227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.852 [INFO][5236] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" HandleID="k8s-pod-network.a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.855 [INFO][5236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.855 [INFO][5236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.865 [WARNING][5236] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" HandleID="k8s-pod-network.a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.865 [INFO][5236] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" HandleID="k8s-pod-network.a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.866 [INFO][5236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:52.870749 env[1303]: 2025-09-13 00:54:52.868 [INFO][5227] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:52.871944 env[1303]: time="2025-09-13T00:54:52.870794482Z" level=info msg="TearDown network for sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\" successfully" Sep 13 00:54:52.871944 env[1303]: time="2025-09-13T00:54:52.870824789Z" level=info msg="StopPodSandbox for \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\" returns successfully" Sep 13 00:54:52.871944 env[1303]: time="2025-09-13T00:54:52.871378058Z" level=info msg="RemovePodSandbox for \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\"" Sep 13 00:54:52.871944 env[1303]: time="2025-09-13T00:54:52.871418493Z" level=info msg="Forcibly stopping sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\"" Sep 13 00:54:52.915992 sshd[5098]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:52.915000 audit[5098]: USER_END pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.918294 systemd-logind[1289]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:54:52.915000 audit[5098]: CRED_DISP pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.919194 systemd[1]: sshd@14-10.0.0.131:22-10.0.0.1:54974.service: Deactivated successfully. Sep 13 00:54:52.919929 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:54:52.920757 systemd-logind[1289]: Removed session 15. Sep 13 00:54:52.925093 kernel: audit: type=1106 audit(1757724892.915:494): pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.925199 kernel: audit: type=1104 audit(1757724892.915:495): pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:52.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.131:22-10.0.0.1:54974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.912 [WARNING][5254] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b5d081d4-7d87-4234-8171-fc6646bb9f9b", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72f879cd5c5b9086fa1276238e20291ef773ec1d84ac63250ce449b7105aa731", Pod:"coredns-7c65d6cfc9-dqtkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif09d4481a4c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.913 [INFO][5254] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.913 [INFO][5254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" iface="eth0" netns="" Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.913 [INFO][5254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.913 [INFO][5254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.934 [INFO][5264] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" HandleID="k8s-pod-network.a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.934 [INFO][5264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.934 [INFO][5264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.940 [WARNING][5264] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" HandleID="k8s-pod-network.a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.940 [INFO][5264] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" HandleID="k8s-pod-network.a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Workload="localhost-k8s-coredns--7c65d6cfc9--dqtkm-eth0" Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.941 [INFO][5264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:52.945611 env[1303]: 2025-09-13 00:54:52.943 [INFO][5254] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a" Sep 13 00:54:52.946029 env[1303]: time="2025-09-13T00:54:52.945644351Z" level=info msg="TearDown network for sandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\" successfully" Sep 13 00:54:52.949067 env[1303]: time="2025-09-13T00:54:52.949023786Z" level=info msg="RemovePodSandbox \"a5b01f142ff214dfc091098b9089378b10577930fa5b84138ac45361a6cb4d4a\" returns successfully" Sep 13 00:54:52.949533 env[1303]: time="2025-09-13T00:54:52.949507505Z" level=info msg="StopPodSandbox for \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\"" Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:52.978 [WARNING][5284] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--whpk5-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d2542adf-ca6b-4757-9a4c-0ba349d6ae47", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d", Pod:"goldmane-7988f88666-whpk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0591308f647", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:52.979 [INFO][5284] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:52.979 [INFO][5284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" iface="eth0" netns="" Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:52.979 [INFO][5284] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:52.979 [INFO][5284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:52.996 [INFO][5293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" HandleID="k8s-pod-network.8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:52.997 [INFO][5293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:52.997 [INFO][5293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:53.004 [WARNING][5293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" HandleID="k8s-pod-network.8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:53.004 [INFO][5293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" HandleID="k8s-pod-network.8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:53.006 [INFO][5293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:54:53.009891 env[1303]: 2025-09-13 00:54:53.008 [INFO][5284] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:53.010449 env[1303]: time="2025-09-13T00:54:53.009905879Z" level=info msg="TearDown network for sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\" successfully" Sep 13 00:54:53.010449 env[1303]: time="2025-09-13T00:54:53.009937578Z" level=info msg="StopPodSandbox for \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\" returns successfully" Sep 13 00:54:53.010498 env[1303]: time="2025-09-13T00:54:53.010454529Z" level=info msg="RemovePodSandbox for \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\"" Sep 13 00:54:53.010525 env[1303]: time="2025-09-13T00:54:53.010492060Z" level=info msg="Forcibly stopping sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\"" Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.040 [WARNING][5311] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--whpk5-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d2542adf-ca6b-4757-9a4c-0ba349d6ae47", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c29d23b48eb44337b078b0d8f82bc308ae44e39affec907500b94c30a3460e8d", Pod:"goldmane-7988f88666-whpk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0591308f647", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.040 [INFO][5311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.040 [INFO][5311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" iface="eth0" netns="" Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.040 [INFO][5311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.040 [INFO][5311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.061 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" HandleID="k8s-pod-network.8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.061 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.061 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.066 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" HandleID="k8s-pod-network.8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.066 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" HandleID="k8s-pod-network.8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Workload="localhost-k8s-goldmane--7988f88666--whpk5-eth0" Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.068 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:54:53.072399 env[1303]: 2025-09-13 00:54:53.070 [INFO][5311] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc" Sep 13 00:54:53.072399 env[1303]: time="2025-09-13T00:54:53.072360292Z" level=info msg="TearDown network for sandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\" successfully" Sep 13 00:54:53.079373 env[1303]: time="2025-09-13T00:54:53.079343293Z" level=info msg="RemovePodSandbox \"8654427b995d480b8d0a0e472da4ce674c497c52a47649e8a335de5522a56fbc\" returns successfully" Sep 13 00:54:53.079874 env[1303]: time="2025-09-13T00:54:53.079839175Z" level=info msg="StopPodSandbox for \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\"" Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.110 [WARNING][5336] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"29726fd2-0f28-42d0-a860-baf11550e993", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca", Pod:"coredns-7c65d6cfc9-xqk2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3697549c3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.110 [INFO][5336] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.110 [INFO][5336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" iface="eth0" netns="" Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.110 [INFO][5336] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.110 [INFO][5336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.129 [INFO][5345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" HandleID="k8s-pod-network.fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.129 [INFO][5345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.129 [INFO][5345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.134 [WARNING][5345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" HandleID="k8s-pod-network.fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.134 [INFO][5345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" HandleID="k8s-pod-network.fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.136 [INFO][5345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:54:53.139656 env[1303]: 2025-09-13 00:54:53.137 [INFO][5336] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:53.140173 env[1303]: time="2025-09-13T00:54:53.139695160Z" level=info msg="TearDown network for sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\" successfully" Sep 13 00:54:53.140173 env[1303]: time="2025-09-13T00:54:53.139727009Z" level=info msg="StopPodSandbox for \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\" returns successfully" Sep 13 00:54:53.140285 env[1303]: time="2025-09-13T00:54:53.140251314Z" level=info msg="RemovePodSandbox for \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\"" Sep 13 00:54:53.140318 env[1303]: time="2025-09-13T00:54:53.140290918Z" level=info msg="Forcibly stopping sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\"" Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.170 [WARNING][5362] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"29726fd2-0f28-42d0-a860-baf11550e993", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb05a143915428c226f7f4a95bc6cbf4e96ad85b9e7d51879ca2340c601abeca", Pod:"coredns-7c65d6cfc9-xqk2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3697549c3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.170 [INFO][5362] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.170 [INFO][5362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" iface="eth0" netns="" Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.170 [INFO][5362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.170 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.188 [INFO][5371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" HandleID="k8s-pod-network.fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.188 [INFO][5371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.188 [INFO][5371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.195 [WARNING][5371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" HandleID="k8s-pod-network.fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.195 [INFO][5371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" HandleID="k8s-pod-network.fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--xqk2c-eth0" Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.197 [INFO][5371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:53.200786 env[1303]: 2025-09-13 00:54:53.198 [INFO][5362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0" Sep 13 00:54:53.201546 env[1303]: time="2025-09-13T00:54:53.200812192Z" level=info msg="TearDown network for sandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\" successfully" Sep 13 00:54:53.204162 env[1303]: time="2025-09-13T00:54:53.204137756Z" level=info msg="RemovePodSandbox \"fffc1fef8467b0f6284fb446f8684cf7e3bed0a9b72adce4723c8fc352380aa0\" returns successfully" Sep 13 00:54:53.204719 env[1303]: time="2025-09-13T00:54:53.204661880Z" level=info msg="StopPodSandbox for \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\"" Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.234 [WARNING][5389] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0", GenerateName:"calico-apiserver-966dc6bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"20e33070-d374-477c-b056-d9ebed8bda5f", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966dc6bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe", Pod:"calico-apiserver-966dc6bcb-r4qg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6bb8f6089fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.234 [INFO][5389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.234 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" iface="eth0" netns="" Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.234 [INFO][5389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.234 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.253 [INFO][5398] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" HandleID="k8s-pod-network.a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.253 [INFO][5398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.253 [INFO][5398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.258 [WARNING][5398] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" HandleID="k8s-pod-network.a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.258 [INFO][5398] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" HandleID="k8s-pod-network.a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.260 [INFO][5398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:53.264419 env[1303]: 2025-09-13 00:54:53.262 [INFO][5389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:53.265672 env[1303]: time="2025-09-13T00:54:53.264446601Z" level=info msg="TearDown network for sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\" successfully" Sep 13 00:54:53.265672 env[1303]: time="2025-09-13T00:54:53.264479222Z" level=info msg="StopPodSandbox for \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\" returns successfully" Sep 13 00:54:53.265672 env[1303]: time="2025-09-13T00:54:53.264955296Z" level=info msg="RemovePodSandbox for \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\"" Sep 13 00:54:53.265672 env[1303]: time="2025-09-13T00:54:53.264985794Z" level=info msg="Forcibly stopping sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\"" Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.295 [WARNING][5416] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0", GenerateName:"calico-apiserver-966dc6bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"20e33070-d374-477c-b056-d9ebed8bda5f", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966dc6bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50bc1c9b5ff5a71f1e213f89ff93c4a237589f7a0b6091abe4d6b2095a2225fe", Pod:"calico-apiserver-966dc6bcb-r4qg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6bb8f6089fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.295 [INFO][5416] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.295 [INFO][5416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" iface="eth0" netns="" Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.295 [INFO][5416] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.296 [INFO][5416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.326 [INFO][5425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" HandleID="k8s-pod-network.a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.326 [INFO][5425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.326 [INFO][5425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.332 [WARNING][5425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" HandleID="k8s-pod-network.a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.332 [INFO][5425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" HandleID="k8s-pod-network.a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Workload="localhost-k8s-calico--apiserver--966dc6bcb--r4qg4-eth0" Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.333 [INFO][5425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:53.337231 env[1303]: 2025-09-13 00:54:53.335 [INFO][5416] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6" Sep 13 00:54:53.337231 env[1303]: time="2025-09-13T00:54:53.337182279Z" level=info msg="TearDown network for sandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\" successfully" Sep 13 00:54:53.341118 env[1303]: time="2025-09-13T00:54:53.341093703Z" level=info msg="RemovePodSandbox \"a879b8dc538223a7d929321bf18041a24b3ad99fe4bb38636e1067c584e661c6\" returns successfully" Sep 13 00:54:53.341645 env[1303]: time="2025-09-13T00:54:53.341597519Z" level=info msg="StopPodSandbox for \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\"" Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.375 [WARNING][5446] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0", GenerateName:"calico-kube-controllers-798cbcbbb6-", Namespace:"calico-system", SelfLink:"", UID:"e8351b28-7c27-4f21-ad76-83e2e206ba63", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798cbcbbb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f", Pod:"calico-kube-controllers-798cbcbbb6-8gg8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5297e5d61b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.375 [INFO][5446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.375 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" iface="eth0" netns="" Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.375 [INFO][5446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.375 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.404 [INFO][5454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" HandleID="k8s-pod-network.047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.405 [INFO][5454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.405 [INFO][5454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.417 [WARNING][5454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" HandleID="k8s-pod-network.047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.417 [INFO][5454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" HandleID="k8s-pod-network.047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.419 [INFO][5454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:53.423587 env[1303]: 2025-09-13 00:54:53.420 [INFO][5446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:53.424072 env[1303]: time="2025-09-13T00:54:53.423637727Z" level=info msg="TearDown network for sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\" successfully" Sep 13 00:54:53.424072 env[1303]: time="2025-09-13T00:54:53.423678162Z" level=info msg="StopPodSandbox for \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\" returns successfully" Sep 13 00:54:53.424306 env[1303]: time="2025-09-13T00:54:53.424263010Z" level=info msg="RemovePodSandbox for \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\"" Sep 13 00:54:53.424362 env[1303]: time="2025-09-13T00:54:53.424307725Z" level=info msg="Forcibly stopping sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\"" Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.457 [WARNING][5472] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0", GenerateName:"calico-kube-controllers-798cbcbbb6-", Namespace:"calico-system", SelfLink:"", UID:"e8351b28-7c27-4f21-ad76-83e2e206ba63", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798cbcbbb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"19a08b852c987693d11f575b84b42d9ff0188f07e3eca2134effeb5624d2654f", Pod:"calico-kube-controllers-798cbcbbb6-8gg8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5297e5d61b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.458 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.458 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" iface="eth0" netns="" Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.458 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.458 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.478 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" HandleID="k8s-pod-network.047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.479 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.479 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.484 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" HandleID="k8s-pod-network.047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.484 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" HandleID="k8s-pod-network.047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Workload="localhost-k8s-calico--kube--controllers--798cbcbbb6--8gg8t-eth0" Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.486 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:53.490436 env[1303]: 2025-09-13 00:54:53.488 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35" Sep 13 00:54:53.491024 env[1303]: time="2025-09-13T00:54:53.490445684Z" level=info msg="TearDown network for sandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\" successfully" Sep 13 00:54:53.494251 env[1303]: time="2025-09-13T00:54:53.494190786Z" level=info msg="RemovePodSandbox \"047ed739459f694812b73f3d77294d1499443d3640c559ca8f162facb7e40c35\" returns successfully" Sep 13 00:54:53.494791 env[1303]: time="2025-09-13T00:54:53.494748453Z" level=info msg="StopPodSandbox for \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\"" Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.526 [WARNING][5498] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzvl6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad838603-c026-4e41-bf47-8168df866652", ResourceVersion:"1191", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66", Pod:"csi-node-driver-wzvl6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7cf627950f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.527 [INFO][5498] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.527 [INFO][5498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" iface="eth0" netns="" Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.527 [INFO][5498] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.527 [INFO][5498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.545 [INFO][5508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" HandleID="k8s-pod-network.aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.545 [INFO][5508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.545 [INFO][5508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.552 [WARNING][5508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" HandleID="k8s-pod-network.aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.552 [INFO][5508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" HandleID="k8s-pod-network.aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.554 [INFO][5508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:54:53.559099 env[1303]: 2025-09-13 00:54:53.557 [INFO][5498] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:53.559822 env[1303]: time="2025-09-13T00:54:53.559123703Z" level=info msg="TearDown network for sandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\" successfully" Sep 13 00:54:53.559822 env[1303]: time="2025-09-13T00:54:53.559154331Z" level=info msg="StopPodSandbox for \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\" returns successfully" Sep 13 00:54:53.559822 env[1303]: time="2025-09-13T00:54:53.559653487Z" level=info msg="RemovePodSandbox for \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\"" Sep 13 00:54:53.559822 env[1303]: time="2025-09-13T00:54:53.559703491Z" level=info msg="Forcibly stopping sandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\"" Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.593 [WARNING][5526] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzvl6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad838603-c026-4e41-bf47-8168df866652", ResourceVersion:"1191", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfaab7f7b72faa74a909468975c72cd03e0a93134e659cd5461935945f209c66", Pod:"csi-node-driver-wzvl6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7cf627950f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.594 [INFO][5526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.594 [INFO][5526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" iface="eth0" netns="" Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.594 [INFO][5526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.594 [INFO][5526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.613 [INFO][5535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" HandleID="k8s-pod-network.aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.613 [INFO][5535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.613 [INFO][5535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.620 [WARNING][5535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" HandleID="k8s-pod-network.aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.620 [INFO][5535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" HandleID="k8s-pod-network.aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Workload="localhost-k8s-csi--node--driver--wzvl6-eth0" Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.622 [INFO][5535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:54:53.626746 env[1303]: 2025-09-13 00:54:53.624 [INFO][5526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb" Sep 13 00:54:53.626746 env[1303]: time="2025-09-13T00:54:53.626708078Z" level=info msg="TearDown network for sandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\" successfully" Sep 13 00:54:53.630407 env[1303]: time="2025-09-13T00:54:53.630362280Z" level=info msg="RemovePodSandbox \"aa853d11413bbf0a66189f530ac4d4aa309feab0293f51b47a514b835486bebb\" returns successfully" Sep 13 00:54:57.920009 systemd[1]: Started sshd@15-10.0.0.131:22-10.0.0.1:54982.service. Sep 13 00:54:57.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.131:22-10.0.0.1:54982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:57.921402 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:54:57.921461 kernel: audit: type=1130 audit(1757724897.919:497): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.131:22-10.0.0.1:54982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:57.967000 audit[5545]: USER_ACCT pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:57.968541 sshd[5545]: Accepted publickey for core from 10.0.0.1 port 54982 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:57.971000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:57.972786 sshd[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:57.976778 kernel: audit: type=1101 audit(1757724897.967:498): pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:57.976824 kernel: audit: type=1103 audit(1757724897.971:499): pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:57.976853 kernel: audit: type=1006 audit(1757724897.971:500): pid=5545 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Sep 13 00:54:57.976910 systemd-logind[1289]: New session 16 of user core. Sep 13 00:54:57.977383 systemd[1]: Started session-16.scope. 
Sep 13 00:54:57.978798 kernel: audit: type=1300 audit(1757724897.971:500): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe08ed950 a2=3 a3=0 items=0 ppid=1 pid=5545 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:57.971000 audit[5545]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe08ed950 a2=3 a3=0 items=0 ppid=1 pid=5545 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:57.982418 kernel: audit: type=1327 audit(1757724897.971:500): proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:57.971000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:57.981000 audit[5545]: USER_START pid=5545 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:57.987953 kernel: audit: type=1105 audit(1757724897.981:501): pid=5545 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:57.988016 kernel: audit: type=1103 audit(1757724897.983:502): pid=5548 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:57.983000 audit[5548]: CRED_ACQ pid=5548 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:58.184519 sshd[5545]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:58.185000 audit[5545]: USER_END pid=5545 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:58.187124 systemd[1]: sshd@15-10.0.0.131:22-10.0.0.1:54982.service: Deactivated successfully. Sep 13 00:54:58.188269 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:54:58.188315 systemd-logind[1289]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:54:58.189480 systemd-logind[1289]: Removed session 16. Sep 13 00:54:58.185000 audit[5545]: CRED_DISP pid=5545 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:58.194495 kernel: audit: type=1106 audit(1757724898.185:503): pid=5545 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:58.194545 kernel: audit: type=1104 audit(1757724898.185:504): pid=5545 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:54:58.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.131:22-10.0.0.1:54982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 00:55:03.040622 kubelet[2119]: E0913 00:55:03.040544 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:55:03.187775 systemd[1]: Started sshd@16-10.0.0.131:22-10.0.0.1:59340.service. Sep 13 00:55:03.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.131:22-10.0.0.1:59340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:03.188993 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:55:03.189113 kernel: audit: type=1130 audit(1757724903.187:506): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.131:22-10.0.0.1:59340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:03.226000 audit[5562]: USER_ACCT pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:03.227435 sshd[5562]: Accepted publickey for core from 10.0.0.1 port 59340 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:03.229122 sshd[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:03.228000 audit[5562]: CRED_ACQ pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:03.233171 systemd-logind[1289]: New session 17 of user core. Sep 13 00:55:03.234015 systemd[1]: Started session-17.scope. 
Sep 13 00:55:03.235330 kernel: audit: type=1101 audit(1757724903.226:507): pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:03.235399 kernel: audit: type=1103 audit(1757724903.228:508): pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:03.235426 kernel: audit: type=1006 audit(1757724903.228:509): pid=5562 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Sep 13 00:55:03.228000 audit[5562]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc397d0f0 a2=3 a3=0 items=0 ppid=1 pid=5562 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:03.242140 kernel: audit: type=1300 audit(1757724903.228:509): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc397d0f0 a2=3 a3=0 items=0 ppid=1 pid=5562 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:03.242197 kernel: audit: type=1327 audit(1757724903.228:509): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:03.228000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:03.237000 audit[5562]: USER_START pid=5562 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Sep 13 00:55:03.247946 kernel: audit: type=1105 audit(1757724903.237:510): pid=5562 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:03.248001 kernel: audit: type=1103 audit(1757724903.239:511): pid=5565 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:03.239000 audit[5565]: CRED_ACQ pid=5565 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:03.345910 sshd[5562]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:03.346000 audit[5562]: USER_END pid=5562 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:03.348648 systemd[1]: sshd@16-10.0.0.131:22-10.0.0.1:59340.service: Deactivated successfully. Sep 13 00:55:03.349795 systemd-logind[1289]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:55:03.349883 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:55:03.350872 systemd-logind[1289]: Removed session 17. 
Sep 13 00:55:03.346000 audit[5562]: CRED_DISP pid=5562 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:03.355819 kernel: audit: type=1106 audit(1757724903.346:512): pid=5562 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:03.355873 kernel: audit: type=1104 audit(1757724903.346:513): pid=5562 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:03.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.131:22-10.0.0.1:59340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:05.053429 systemd[1]: run-containerd-runc-k8s.io-203fd7759f84f2fec55b5a1bf6297c1ac1a67a3003576b0ecb440e3eb8bed784-runc.my0OAv.mount: Deactivated successfully. 
Sep 13 00:55:05.128000 audit[5617]: NETFILTER_CFG table=filter:126 family=2 entries=9 op=nft_register_rule pid=5617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:05.128000 audit[5617]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffec089ebf0 a2=0 a3=7ffec089ebdc items=0 ppid=2266 pid=5617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:05.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:05.134000 audit[5617]: NETFILTER_CFG table=nat:127 family=2 entries=31 op=nft_register_chain pid=5617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:05.134000 audit[5617]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffec089ebf0 a2=0 a3=7ffec089ebdc items=0 ppid=2266 pid=5617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:05.134000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:06.058373 kubelet[2119]: E0913 00:55:06.058327 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:55:08.349186 systemd[1]: Started sshd@17-10.0.0.131:22-10.0.0.1:59346.service. Sep 13 00:55:08.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.131:22-10.0.0.1:59346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:08.350535 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:55:08.350613 kernel: audit: type=1130 audit(1757724908.347:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.131:22-10.0.0.1:59346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:08.430000 audit[5639]: USER_ACCT pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:08.431971 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 59346 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:08.433990 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:08.432000 audit[5639]: CRED_ACQ pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:08.437860 systemd-logind[1289]: New session 18 of user core. Sep 13 00:55:08.438577 systemd[1]: Started session-18.scope. 
Sep 13 00:55:08.438952 kernel: audit: type=1101 audit(1757724908.430:518): pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:08.439020 kernel: audit: type=1103 audit(1757724908.432:519): pid=5639 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:08.439063 kernel: audit: type=1006 audit(1757724908.432:520): pid=5639 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Sep 13 00:55:08.432000 audit[5639]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe03cc2320 a2=3 a3=0 items=0 ppid=1 pid=5639 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:08.444882 kernel: audit: type=1300 audit(1757724908.432:520): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe03cc2320 a2=3 a3=0 items=0 ppid=1 pid=5639 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:08.444948 kernel: audit: type=1327 audit(1757724908.432:520): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:08.432000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:08.442000 audit[5639]: USER_START pid=5639 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Sep 13 00:55:08.450195 kernel: audit: type=1105 audit(1757724908.442:521): pid=5639 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:08.450238 kernel: audit: type=1103 audit(1757724908.443:522): pid=5642 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:08.443000 audit[5642]: CRED_ACQ pid=5642 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:08.574700 sshd[5639]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:08.574000 audit[5639]: USER_END pid=5639 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:08.577344 systemd[1]: sshd@17-10.0.0.131:22-10.0.0.1:59346.service: Deactivated successfully. Sep 13 00:55:08.578455 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:55:08.578869 systemd-logind[1289]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:55:08.579603 systemd-logind[1289]: Removed session 18. 
Sep 13 00:55:08.574000 audit[5639]: CRED_DISP pid=5639 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:08.584072 kernel: audit: type=1106 audit(1757724908.574:523): pid=5639 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:08.584128 kernel: audit: type=1104 audit(1757724908.574:524): pid=5639 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:08.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.131:22-10.0.0.1:59346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:13.578695 systemd[1]: Started sshd@18-10.0.0.131:22-10.0.0.1:42198.service. Sep 13 00:55:13.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.131:22-10.0.0.1:42198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:13.580107 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:55:13.585649 kernel: audit: type=1130 audit(1757724913.577:526): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.131:22-10.0.0.1:42198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:13.626000 audit[5660]: USER_ACCT pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.628494 sshd[5660]: Accepted publickey for core from 10.0.0.1 port 42198 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:13.630000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.632807 sshd[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:13.636482 kernel: audit: type=1101 audit(1757724913.626:527): pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.636616 kernel: audit: type=1103 audit(1757724913.630:528): pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.636641 kernel: audit: type=1006 audit(1757724913.630:529): pid=5660 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Sep 13 00:55:13.636840 systemd-logind[1289]: New session 19 of user core. Sep 13 00:55:13.637554 systemd[1]: Started session-19.scope. 
Sep 13 00:55:13.630000 audit[5660]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3ca19f80 a2=3 a3=0 items=0 ppid=1 pid=5660 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:13.643713 kernel: audit: type=1300 audit(1757724913.630:529): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3ca19f80 a2=3 a3=0 items=0 ppid=1 pid=5660 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:13.643769 kernel: audit: type=1327 audit(1757724913.630:529): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:13.630000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:13.641000 audit[5660]: USER_START pid=5660 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.649324 kernel: audit: type=1105 audit(1757724913.641:530): pid=5660 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.649370 kernel: audit: type=1103 audit(1757724913.643:531): pid=5663 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.643000 audit[5663]: CRED_ACQ pid=5663 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.822616 sshd[5660]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:13.821000 audit[5660]: USER_END pid=5660 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.822000 audit[5660]: CRED_DISP pid=5660 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.825801 systemd[1]: Started sshd@19-10.0.0.131:22-10.0.0.1:42204.service. Sep 13 00:55:13.826432 systemd[1]: sshd@18-10.0.0.131:22-10.0.0.1:42198.service: Deactivated successfully. Sep 13 00:55:13.828841 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:55:13.829088 systemd-logind[1289]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:55:13.830280 systemd-logind[1289]: Removed session 19. 
Sep 13 00:55:13.832849 kernel: audit: type=1106 audit(1757724913.821:532): pid=5660 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.832921 kernel: audit: type=1104 audit(1757724913.822:533): pid=5660 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.131:22-10.0.0.1:42204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:13.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.131:22-10.0.0.1:42198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:13.864000 audit[5672]: USER_ACCT pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.865886 sshd[5672]: Accepted publickey for core from 10.0.0.1 port 42204 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:13.865000 audit[5672]: CRED_ACQ pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.865000 audit[5672]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffaddace90 a2=3 a3=0 items=0 ppid=1 pid=5672 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:13.865000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:13.866886 sshd[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:13.870301 systemd-logind[1289]: New session 20 of user core. Sep 13 00:55:13.871325 systemd[1]: Started session-20.scope. 
Sep 13 00:55:13.874000 audit[5672]: USER_START pid=5672 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:13.875000 audit[5677]: CRED_ACQ pid=5677 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:14.121919 sshd[5672]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:14.121000 audit[5672]: USER_END pid=5672 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:14.121000 audit[5672]: CRED_DISP pid=5672 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:14.124715 systemd[1]: Started sshd@20-10.0.0.131:22-10.0.0.1:42214.service. Sep 13 00:55:14.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.131:22-10.0.0.1:42214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:14.125250 systemd[1]: sshd@19-10.0.0.131:22-10.0.0.1:42204.service: Deactivated successfully. Sep 13 00:55:14.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.131:22-10.0.0.1:42204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:14.126179 systemd-logind[1289]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:55:14.126295 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:55:14.127132 systemd-logind[1289]: Removed session 20. Sep 13 00:55:14.164000 audit[5685]: USER_ACCT pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:14.166127 sshd[5685]: Accepted publickey for core from 10.0.0.1 port 42214 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:14.165000 audit[5685]: CRED_ACQ pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:14.165000 audit[5685]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7ef12320 a2=3 a3=0 items=0 ppid=1 pid=5685 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:14.165000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:14.167006 sshd[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:14.170459 systemd-logind[1289]: New session 21 of user core. Sep 13 00:55:14.171379 systemd[1]: Started session-21.scope. 
Sep 13 00:55:14.174000 audit[5685]: USER_START pid=5685 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:14.175000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:15.574000 audit[5703]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:15.574000 audit[5703]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffe0f1cec50 a2=0 a3=7ffe0f1cec3c items=0 ppid=2266 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:15.574000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:15.585000 audit[5703]: NETFILTER_CFG table=nat:129 family=2 entries=26 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:15.585000 audit[5703]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffe0f1cec50 a2=0 a3=0 items=0 ppid=2266 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:15.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:15.595307 sshd[5685]: 
pam_unix(sshd:session): session closed for user core Sep 13 00:55:15.597243 systemd[1]: Started sshd@21-10.0.0.131:22-10.0.0.1:42222.service. Sep 13 00:55:15.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.131:22-10.0.0.1:42222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:15.596000 audit[5685]: USER_END pid=5685 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:15.596000 audit[5685]: CRED_DISP pid=5685 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:15.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.131:22-10.0.0.1:42214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:15.599871 systemd[1]: sshd@20-10.0.0.131:22-10.0.0.1:42214.service: Deactivated successfully. Sep 13 00:55:15.601839 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:55:15.602367 systemd-logind[1289]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:55:15.603420 systemd-logind[1289]: Removed session 21. 
Sep 13 00:55:15.605000 audit[5708]: NETFILTER_CFG table=filter:130 family=2 entries=32 op=nft_register_rule pid=5708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:15.605000 audit[5708]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffe3b539910 a2=0 a3=7ffe3b5398fc items=0 ppid=2266 pid=5708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:15.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:15.613000 audit[5708]: NETFILTER_CFG table=nat:131 family=2 entries=26 op=nft_register_rule pid=5708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:15.613000 audit[5708]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffe3b539910 a2=0 a3=0 items=0 ppid=2266 pid=5708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:15.613000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:15.643000 audit[5704]: USER_ACCT pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:15.645289 sshd[5704]: Accepted publickey for core from 10.0.0.1 port 42222 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:15.644000 audit[5704]: CRED_ACQ pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:15.644000 audit[5704]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe151b630 a2=3 a3=0 items=0 ppid=1 pid=5704 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:15.644000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:15.646495 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:15.650312 systemd-logind[1289]: New session 22 of user core. Sep 13 00:55:15.651102 systemd[1]: Started session-22.scope. Sep 13 00:55:15.654000 audit[5704]: USER_START pid=5704 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:15.655000 audit[5712]: CRED_ACQ pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:16.005828 sshd[5704]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:16.005000 audit[5704]: USER_END pid=5704 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:16.005000 audit[5704]: CRED_DISP pid=5704 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 
13 00:55:16.008311 systemd[1]: Started sshd@22-10.0.0.131:22-10.0.0.1:42224.service. Sep 13 00:55:16.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.131:22-10.0.0.1:42224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:16.009241 systemd[1]: sshd@21-10.0.0.131:22-10.0.0.1:42222.service: Deactivated successfully. Sep 13 00:55:16.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.131:22-10.0.0.1:42222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:16.010402 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:55:16.011107 systemd-logind[1289]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:55:16.012086 systemd-logind[1289]: Removed session 22. Sep 13 00:55:16.049000 audit[5719]: USER_ACCT pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:16.051399 sshd[5719]: Accepted publickey for core from 10.0.0.1 port 42224 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:16.050000 audit[5719]: CRED_ACQ pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:16.050000 audit[5719]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc41ca6040 a2=3 a3=0 items=0 ppid=1 pid=5719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:16.050000 
audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:16.052748 sshd[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:16.056436 systemd-logind[1289]: New session 23 of user core. Sep 13 00:55:16.057148 systemd[1]: Started session-23.scope. Sep 13 00:55:16.061000 audit[5719]: USER_START pid=5719 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:16.063000 audit[5724]: CRED_ACQ pid=5724 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:16.166629 sshd[5719]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:16.166000 audit[5719]: USER_END pid=5719 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:16.166000 audit[5719]: CRED_DISP pid=5719 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:16.169228 systemd[1]: sshd@22-10.0.0.131:22-10.0.0.1:42224.service: Deactivated successfully. Sep 13 00:55:16.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.131:22-10.0.0.1:42224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:16.170273 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:55:16.170435 systemd-logind[1289]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:55:16.171346 systemd-logind[1289]: Removed session 23. Sep 13 00:55:21.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.131:22-10.0.0.1:50144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:21.170228 systemd[1]: Started sshd@23-10.0.0.131:22-10.0.0.1:50144.service. Sep 13 00:55:21.197639 kernel: kauditd_printk_skb: 57 callbacks suppressed Sep 13 00:55:21.197712 kernel: audit: type=1130 audit(1757724921.169:575): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.131:22-10.0.0.1:50144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:21.231000 audit[5735]: USER_ACCT pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.231961 sshd[5735]: Accepted publickey for core from 10.0.0.1 port 50144 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:21.233835 sshd[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:21.237948 systemd-logind[1289]: New session 24 of user core. Sep 13 00:55:21.238256 systemd[1]: Started session-24.scope. 
Sep 13 00:55:21.232000 audit[5735]: CRED_ACQ pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.292636 kernel: audit: type=1101 audit(1757724921.231:576): pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.292754 kernel: audit: type=1103 audit(1757724921.232:577): pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.295043 kernel: audit: type=1006 audit(1757724921.232:578): pid=5735 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Sep 13 00:55:21.295202 kernel: audit: type=1300 audit(1757724921.232:578): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce56adf50 a2=3 a3=0 items=0 ppid=1 pid=5735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:21.232000 audit[5735]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce56adf50 a2=3 a3=0 items=0 ppid=1 pid=5735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:21.298832 kernel: audit: type=1327 audit(1757724921.232:578): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:21.232000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:21.242000 audit[5735]: USER_START 
pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.304718 kernel: audit: type=1105 audit(1757724921.242:579): pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.305006 kernel: audit: type=1103 audit(1757724921.243:580): pid=5737 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.243000 audit[5737]: CRED_ACQ pid=5737 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.443866 sshd[5735]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:21.444000 audit[5735]: USER_END pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.447417 systemd[1]: sshd@23-10.0.0.131:22-10.0.0.1:50144.service: Deactivated successfully. Sep 13 00:55:21.448285 systemd[1]: session-24.scope: Deactivated successfully. 
Sep 13 00:55:21.444000 audit[5735]: CRED_DISP pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.451316 systemd-logind[1289]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:55:21.452184 systemd-logind[1289]: Removed session 24. Sep 13 00:55:21.454109 kernel: audit: type=1106 audit(1757724921.444:581): pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.454169 kernel: audit: type=1104 audit(1757724921.444:582): pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:21.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.131:22-10.0.0.1:50144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:21.981000 audit[5750]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:21.981000 audit[5750]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fffe57231d0 a2=0 a3=7fffe57231bc items=0 ppid=2266 pid=5750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:21.981000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:21.986000 audit[5750]: NETFILTER_CFG table=nat:133 family=2 entries=110 op=nft_register_chain pid=5750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:21.986000 audit[5750]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fffe57231d0 a2=0 a3=7fffe57231bc items=0 ppid=2266 pid=5750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:21.986000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:23.469989 systemd[1]: run-containerd-runc-k8s.io-4957b8bdb65068fed31cf8365ce9bfe04f3bd57f7a2bcce341e3c28309a39e7b-runc.AP9BtM.mount: Deactivated successfully. Sep 13 00:55:26.039575 kubelet[2119]: E0913 00:55:26.039519 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:55:26.447396 systemd[1]: Started sshd@24-10.0.0.131:22-10.0.0.1:50150.service. 
Sep 13 00:55:26.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.131:22-10.0.0.1:50150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:26.448728 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:55:26.448795 kernel: audit: type=1130 audit(1757724926.447:586): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.131:22-10.0.0.1:50150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:26.488000 audit[5772]: USER_ACCT pid=5772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:26.489583 sshd[5772]: Accepted publickey for core from 10.0.0.1 port 50150 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:26.492000 audit[5772]: CRED_ACQ pid=5772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:26.493621 sshd[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:26.496756 kernel: audit: type=1101 audit(1757724926.488:587): pid=5772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:26.496815 kernel: audit: type=1103 audit(1757724926.492:588): pid=5772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:26.496833 kernel: audit: type=1006 audit(1757724926.492:589): pid=5772 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Sep 13 00:55:26.497608 systemd-logind[1289]: New session 25 of user core. Sep 13 00:55:26.497814 systemd[1]: Started session-25.scope. Sep 13 00:55:26.498961 kernel: audit: type=1300 audit(1757724926.492:589): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe15f6f800 a2=3 a3=0 items=0 ppid=1 pid=5772 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:26.492000 audit[5772]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe15f6f800 a2=3 a3=0 items=0 ppid=1 pid=5772 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:26.492000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:26.504023 kernel: audit: type=1327 audit(1757724926.492:589): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:26.504069 kernel: audit: type=1105 audit(1757724926.501:590): pid=5772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:26.501000 audit[5772]: USER_START pid=5772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:26.502000 audit[5775]: CRED_ACQ pid=5775 
uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:26.511588 kernel: audit: type=1103 audit(1757724926.502:591): pid=5775 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:26.680654 sshd[5772]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:26.681000 audit[5772]: USER_END pid=5772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:26.683752 systemd[1]: sshd@24-10.0.0.131:22-10.0.0.1:50150.service: Deactivated successfully. Sep 13 00:55:26.686266 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:55:26.687267 kernel: audit: type=1106 audit(1757724926.681:592): pid=5772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:26.686917 systemd-logind[1289]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:55:26.681000 audit[5772]: CRED_DISP pid=5772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:26.688444 systemd-logind[1289]: Removed session 25. 
Sep 13 00:55:26.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.131:22-10.0.0.1:50150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:26.691590 kernel: audit: type=1104 audit(1757724926.681:593): pid=5772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:28.039743 kubelet[2119]: E0913 00:55:28.039704 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:55:30.861683 systemd[1]: run-containerd-runc-k8s.io-203fd7759f84f2fec55b5a1bf6297c1ac1a67a3003576b0ecb440e3eb8bed784-runc.rBZw10.mount: Deactivated successfully. Sep 13 00:55:31.684244 systemd[1]: Started sshd@25-10.0.0.131:22-10.0.0.1:48628.service. Sep 13 00:55:31.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.131:22-10.0.0.1:48628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:31.685299 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:55:31.685444 kernel: audit: type=1130 audit(1757724931.683:595): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.131:22-10.0.0.1:48628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:31.730000 audit[5809]: USER_ACCT pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:31.731159 sshd[5809]: Accepted publickey for core from 10.0.0.1 port 48628 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:31.734000 audit[5809]: CRED_ACQ pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:31.735343 sshd[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:31.739117 kernel: audit: type=1101 audit(1757724931.730:596): pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:31.739174 kernel: audit: type=1103 audit(1757724931.734:597): pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:55:31.739198 kernel: audit: type=1006 audit(1757724931.734:598): pid=5809 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Sep 13 00:55:31.740244 systemd[1]: Started session-26.scope. 
Sep 13 00:55:31.734000 audit[5809]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4f282770 a2=3 a3=0 items=0 ppid=1 pid=5809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:55:31.741474 systemd-logind[1289]: New session 26 of user core.
Sep 13 00:55:31.747606 kernel: audit: type=1300 audit(1757724931.734:598): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4f282770 a2=3 a3=0 items=0 ppid=1 pid=5809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:55:31.747989 kernel: audit: type=1327 audit(1757724931.734:598): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:55:31.734000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:55:31.749000 audit[5809]: USER_START pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:31.750000 audit[5812]: CRED_ACQ pid=5812 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:31.759880 kernel: audit: type=1105 audit(1757724931.749:599): pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:31.760015 kernel: audit: type=1103 audit(1757724931.750:600): pid=5812 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:31.937171 sshd[5809]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:31.946481 kernel: audit: type=1106 audit(1757724931.937:601): pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:31.946576 kernel: audit: type=1104 audit(1757724931.937:602): pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:31.937000 audit[5809]: USER_END pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:31.937000 audit[5809]: CRED_DISP pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:31.943627 systemd[1]: sshd@25-10.0.0.131:22-10.0.0.1:48628.service: Deactivated successfully.
Sep 13 00:55:31.944992 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:55:31.945544 systemd-logind[1289]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:55:31.946319 systemd-logind[1289]: Removed session 26.
Sep 13 00:55:31.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.131:22-10.0.0.1:48628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:55:34.039893 kubelet[2119]: E0913 00:55:34.039853 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:36.940104 systemd[1]: Started sshd@26-10.0.0.131:22-10.0.0.1:48644.service.
Sep 13 00:55:36.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.131:22-10.0.0.1:48644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:55:36.941551 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:55:36.941618 kernel: audit: type=1130 audit(1757724936.939:604): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.131:22-10.0.0.1:48644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:55:36.986000 audit[5885]: USER_ACCT pid=5885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:36.987625 sshd[5885]: Accepted publickey for core from 10.0.0.1 port 48644 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:36.992000 audit[5885]: CRED_ACQ pid=5885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:36.993398 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:36.996833 kernel: audit: type=1101 audit(1757724936.986:605): pid=5885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:36.996887 kernel: audit: type=1103 audit(1757724936.992:606): pid=5885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:36.996959 kernel: audit: type=1006 audit(1757724936.992:607): pid=5885 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Sep 13 00:55:36.998013 systemd[1]: Started session-27.scope.
Sep 13 00:55:36.998351 systemd-logind[1289]: New session 27 of user core.
Sep 13 00:55:36.999539 kernel: audit: type=1300 audit(1757724936.992:607): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4be24380 a2=3 a3=0 items=0 ppid=1 pid=5885 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:55:36.992000 audit[5885]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4be24380 a2=3 a3=0 items=0 ppid=1 pid=5885 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:55:36.992000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:55:37.005238 kernel: audit: type=1327 audit(1757724936.992:607): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:55:37.005298 kernel: audit: type=1105 audit(1757724937.003:608): pid=5885 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:37.003000 audit[5885]: USER_START pid=5885 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:37.005000 audit[5888]: CRED_ACQ pid=5888 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:37.015589 kernel: audit: type=1103 audit(1757724937.005:609): pid=5888 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:37.166896 sshd[5885]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:37.167000 audit[5885]: USER_END pid=5885 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:37.167000 audit[5885]: CRED_DISP pid=5885 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:37.176183 kernel: audit: type=1106 audit(1757724937.167:610): pid=5885 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:37.176360 kernel: audit: type=1104 audit(1757724937.167:611): pid=5885 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:55:37.176634 systemd[1]: sshd@26-10.0.0.131:22-10.0.0.1:48644.service: Deactivated successfully.
Sep 13 00:55:37.177404 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:55:37.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.131:22-10.0.0.1:48644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:55:37.178227 systemd-logind[1289]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:55:37.179339 systemd-logind[1289]: Removed session 27.