Aug 13 01:06:11.106917 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025 Aug 13 01:06:11.106939 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 01:06:11.106947 kernel: BIOS-provided physical RAM map: Aug 13 01:06:11.106953 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 01:06:11.106958 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 01:06:11.106964 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 01:06:11.106970 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Aug 13 01:06:11.106976 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Aug 13 01:06:11.106983 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 01:06:11.106989 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 01:06:11.106994 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 01:06:11.107000 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 01:06:11.107006 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 13 01:06:11.107011 kernel: NX (Execute Disable) protection: active Aug 13 01:06:11.107020 kernel: SMBIOS 2.8 present. Aug 13 01:06:11.107026 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Aug 13 01:06:11.107032 kernel: Hypervisor detected: KVM Aug 13 01:06:11.107038 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 01:06:11.107046 kernel: kvm-clock: cpu 0, msr 1d19e001, primary cpu clock Aug 13 01:06:11.107052 kernel: kvm-clock: using sched offset of 2879808974 cycles Aug 13 01:06:11.107059 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 01:06:11.107065 kernel: tsc: Detected 2794.750 MHz processor Aug 13 01:06:11.107072 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 01:06:11.107079 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 01:06:11.107086 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Aug 13 01:06:11.107092 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 01:06:11.107098 kernel: Using GB pages for direct mapping Aug 13 01:06:11.107104 kernel: ACPI: Early table checksum verification disabled Aug 13 01:06:11.107110 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Aug 13 01:06:11.107117 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:06:11.107123 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:06:11.107129 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:06:11.107136 kernel: ACPI: FACS 0x000000009CFE0000 000040 Aug 13 01:06:11.107142 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:06:11.107149 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:06:11.107156 kernel: ACPI: MCFG 
0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:06:11.107163 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:06:11.107171 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Aug 13 01:06:11.107180 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Aug 13 01:06:11.107187 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Aug 13 01:06:11.107200 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Aug 13 01:06:11.107209 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Aug 13 01:06:11.107217 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Aug 13 01:06:11.107223 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Aug 13 01:06:11.107230 kernel: No NUMA configuration found Aug 13 01:06:11.107237 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Aug 13 01:06:11.107245 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Aug 13 01:06:11.107251 kernel: Zone ranges: Aug 13 01:06:11.107258 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 01:06:11.107264 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Aug 13 01:06:11.107271 kernel: Normal empty Aug 13 01:06:11.107277 kernel: Movable zone start for each node Aug 13 01:06:11.107284 kernel: Early memory node ranges Aug 13 01:06:11.107290 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 01:06:11.107297 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Aug 13 01:06:11.107303 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Aug 13 01:06:11.107314 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 01:06:11.107321 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 01:06:11.107328 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Aug 13 01:06:11.107334 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 01:06:11.107341 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 01:06:11.107348 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 01:06:11.107354 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 01:06:11.107361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 01:06:11.107367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 01:06:11.107378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 01:06:11.107384 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 01:06:11.107391 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 01:06:11.107398 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 01:06:11.107404 kernel: TSC deadline timer available Aug 13 01:06:11.107411 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Aug 13 01:06:11.107417 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 01:06:11.107424 kernel: kvm-guest: setup PV sched yield Aug 13 01:06:11.107430 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 01:06:11.107438 kernel: Booting paravirtualized kernel on KVM Aug 13 01:06:11.107445 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 01:06:11.107452 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Aug 13 01:06:11.107459 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Aug 13 01:06:11.107465 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Aug 13 01:06:11.107472 kernel: pcpu-alloc: [0] 0 1 2 3 Aug 13 01:06:11.107478 kernel: kvm-guest: setup async PF for cpu 0 Aug 13 01:06:11.107485 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Aug 13 01:06:11.107491 kernel: kvm-guest: PV spinlocks enabled Aug 13 01:06:11.107499 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 01:06:11.107506 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Aug 13 01:06:11.107512 kernel: Policy zone: DMA32 Aug 13 01:06:11.107520 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 01:06:11.107527 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 01:06:11.107533 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 01:06:11.107540 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 01:06:11.107547 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 01:06:11.107555 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 134796K reserved, 0K cma-reserved) Aug 13 01:06:11.107562 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 13 01:06:11.107568 kernel: ftrace: allocating 34608 entries in 136 pages Aug 13 01:06:11.107575 kernel: ftrace: allocated 136 pages with 2 groups Aug 13 01:06:11.107601 kernel: rcu: Hierarchical RCU implementation. Aug 13 01:06:11.107608 kernel: rcu: RCU event tracing is enabled. Aug 13 01:06:11.107615 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 13 01:06:11.107621 kernel: Rude variant of Tasks RCU enabled. Aug 13 01:06:11.107628 kernel: Tracing variant of Tasks RCU enabled. Aug 13 01:06:11.107637 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 01:06:11.107643 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 13 01:06:11.107650 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Aug 13 01:06:11.107656 kernel: random: crng init done Aug 13 01:06:11.107663 kernel: Console: colour VGA+ 80x25 Aug 13 01:06:11.107669 kernel: printk: console [ttyS0] enabled Aug 13 01:06:11.107676 kernel: ACPI: Core revision 20210730 Aug 13 01:06:11.107683 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 01:06:11.107689 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 01:06:11.107697 kernel: x2apic enabled Aug 13 01:06:11.107704 kernel: Switched APIC routing to physical x2apic. Aug 13 01:06:11.107713 kernel: kvm-guest: setup PV IPIs Aug 13 01:06:11.107719 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 01:06:11.107726 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Aug 13 01:06:11.107735 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Aug 13 01:06:11.107742 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 01:06:11.107748 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 13 01:06:11.107755 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 13 01:06:11.107768 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 01:06:11.107775 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 01:06:11.107782 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 01:06:11.107791 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Aug 13 01:06:11.107798 kernel: RETBleed: Mitigation: untrained return thunk Aug 13 01:06:11.107805 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 01:06:11.107821 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 01:06:11.107828 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 01:06:11.107836 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 01:06:11.107844 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 01:06:11.107851 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 01:06:11.107858 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 01:06:11.107866 kernel: Freeing SMP alternatives memory: 32K Aug 13 01:06:11.107872 kernel: pid_max: default: 32768 minimum: 301 Aug 13 01:06:11.107880 kernel: LSM: Security Framework initializing Aug 13 01:06:11.107887 kernel: SELinux: Initializing. Aug 13 01:06:11.107894 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 01:06:11.107902 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 01:06:11.107909 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Aug 13 01:06:11.107916 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 13 01:06:11.107923 kernel: ... version: 0 Aug 13 01:06:11.107930 kernel: ... bit width: 48 Aug 13 01:06:11.107937 kernel: ... generic registers: 6 Aug 13 01:06:11.107944 kernel: ... value mask: 0000ffffffffffff Aug 13 01:06:11.107951 kernel: ... max period: 00007fffffffffff Aug 13 01:06:11.107958 kernel: ... fixed-purpose events: 0 Aug 13 01:06:11.107966 kernel: ... event mask: 000000000000003f Aug 13 01:06:11.107973 kernel: signal: max sigframe size: 1776 Aug 13 01:06:11.107980 kernel: rcu: Hierarchical SRCU implementation. Aug 13 01:06:11.107986 kernel: smp: Bringing up secondary CPUs ... Aug 13 01:06:11.107993 kernel: x86: Booting SMP configuration: Aug 13 01:06:11.108000 kernel: .... 
node #0, CPUs: #1 Aug 13 01:06:11.108007 kernel: kvm-clock: cpu 1, msr 1d19e041, secondary cpu clock Aug 13 01:06:11.108014 kernel: kvm-guest: setup async PF for cpu 1 Aug 13 01:06:11.108021 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Aug 13 01:06:11.108029 kernel: #2 Aug 13 01:06:11.108036 kernel: kvm-clock: cpu 2, msr 1d19e081, secondary cpu clock Aug 13 01:06:11.108043 kernel: kvm-guest: setup async PF for cpu 2 Aug 13 01:06:11.108050 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Aug 13 01:06:11.108056 kernel: #3 Aug 13 01:06:11.108066 kernel: kvm-clock: cpu 3, msr 1d19e0c1, secondary cpu clock Aug 13 01:06:11.108073 kernel: kvm-guest: setup async PF for cpu 3 Aug 13 01:06:11.108080 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Aug 13 01:06:11.108087 kernel: smp: Brought up 1 node, 4 CPUs Aug 13 01:06:11.108095 kernel: smpboot: Max logical packages: 1 Aug 13 01:06:11.108102 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Aug 13 01:06:11.108109 kernel: devtmpfs: initialized Aug 13 01:06:11.108116 kernel: x86/mm: Memory block size: 128MB Aug 13 01:06:11.108123 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 01:06:11.108130 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 13 01:06:11.108137 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 01:06:11.108144 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 01:06:11.108151 kernel: audit: initializing netlink subsys (disabled) Aug 13 01:06:11.108159 kernel: audit: type=2000 audit(1755047170.424:1): state=initialized audit_enabled=0 res=1 Aug 13 01:06:11.108166 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 01:06:11.108173 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 01:06:11.108180 kernel: cpuidle: using governor menu Aug 13 01:06:11.108187 kernel: ACPI: bus type PCI registered Aug 13 01:06:11.108194 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 01:06:11.108200 kernel: dca service started, version 1.12.1 Aug 13 01:06:11.108208 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Aug 13 01:06:11.108215 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Aug 13 01:06:11.108222 kernel: PCI: Using configuration type 1 for base access Aug 13 01:06:11.108230 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 01:06:11.108237 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 01:06:11.108244 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 01:06:11.108251 kernel: ACPI: Added _OSI(Module Device) Aug 13 01:06:11.108258 kernel: ACPI: Added _OSI(Processor Device) Aug 13 01:06:11.108265 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 01:06:11.108272 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 01:06:11.108279 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 01:06:11.108286 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 01:06:11.108294 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 01:06:11.108301 kernel: ACPI: Interpreter enabled Aug 13 01:06:11.108308 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 01:06:11.108315 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 01:06:11.108322 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 01:06:11.108329 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 01:06:11.108336 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 01:06:11.108502 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 01:06:11.108597 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 13 01:06:11.108674 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 13 01:06:11.108683 kernel: PCI host bridge to bus 0000:00 Aug 13 01:06:11.108774 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 01:06:11.108852 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 01:06:11.108919 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 01:06:11.108985 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Aug 13 01:06:11.109055 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 01:06:11.109121 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Aug 13 01:06:11.109188 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 01:06:11.109290 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Aug 13 01:06:11.109384 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Aug 13 01:06:11.109461 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Aug 13 01:06:11.109539 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Aug 13 01:06:11.109627 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Aug 13 01:06:11.109702 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 01:06:11.109795 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Aug 13 01:06:11.109881 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Aug 13 01:06:11.109963 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Aug 13 01:06:11.110037 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Aug 13 01:06:11.110125 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Aug 13 01:06:11.110206 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 01:06:11.110323 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Aug 13 01:06:11.110408 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Aug 13 01:06:11.111636 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 01:06:11.111735 
kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Aug 13 01:06:11.111833 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Aug 13 01:06:11.111924 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Aug 13 01:06:11.112006 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Aug 13 01:06:11.112104 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Aug 13 01:06:11.112194 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 01:06:11.112296 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Aug 13 01:06:11.112448 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Aug 13 01:06:11.112541 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Aug 13 01:06:11.112655 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Aug 13 01:06:11.112741 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Aug 13 01:06:11.112752 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 01:06:11.112761 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 01:06:11.112769 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 01:06:11.112778 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 01:06:11.112786 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 01:06:11.112798 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 01:06:11.112806 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 01:06:11.112823 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 01:06:11.112832 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 01:06:11.112840 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 01:06:11.112848 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 01:06:11.112857 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 01:06:11.112865 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 01:06:11.112873 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 01:06:11.112883 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 01:06:11.112892 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 01:06:11.112900 kernel: iommu: Default domain type: Translated Aug 13 01:06:11.112908 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 01:06:11.112995 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 01:06:11.113079 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 01:06:11.113163 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 01:06:11.113174 kernel: vgaarb: loaded Aug 13 01:06:11.113182 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 01:06:11.113193 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 01:06:11.113202 kernel: PTP clock support registered Aug 13 01:06:11.113210 kernel: PCI: Using ACPI for IRQ routing Aug 13 01:06:11.113219 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 01:06:11.113227 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 01:06:11.113236 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Aug 13 01:06:11.113244 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 01:06:11.113253 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 01:06:11.113261 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 01:06:11.113271 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 01:06:11.113280 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 01:06:11.113288 kernel: pnp: PnP ACPI init Aug 13 01:06:11.113394 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 01:06:11.113406 kernel: pnp: PnP ACPI: found 6 devices Aug 13 01:06:11.113415 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 01:06:11.113423 kernel: NET: Registered PF_INET protocol family Aug 13 01:06:11.113432 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 01:06:11.113443 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 01:06:11.113452 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 01:06:11.113460 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 01:06:11.113469 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Aug 13 01:06:11.113477 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 01:06:11.113486 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 01:06:11.113494 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 01:06:11.113503 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 01:06:11.113511 kernel: NET: Registered PF_XDP protocol family Aug 13 01:06:11.113603 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 01:06:11.113681 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 01:06:11.113756 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 01:06:11.113839 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Aug 13 01:06:11.113914 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 01:06:11.113987 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Aug 13 01:06:11.113997 kernel: PCI: CLS 0 bytes, default 64 Aug 13 01:06:11.114006 kernel: Initialise system trusted keyrings Aug 13 01:06:11.114017 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 01:06:11.114026 kernel: Key type asymmetric registered Aug 13 01:06:11.114034 kernel: Asymmetric key parser 'x509' registered Aug 13 01:06:11.114042 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 01:06:11.114051 kernel: io scheduler mq-deadline registered Aug 13 01:06:11.114059 kernel: io scheduler kyber registered Aug 13 01:06:11.114068 kernel: io scheduler bfq registered Aug 13 01:06:11.114078 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 01:06:11.114088 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 01:06:11.114100 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 
01:06:11.114108 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 13 01:06:11.114117 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 01:06:11.114125 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 01:06:11.114134 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 01:06:11.114142 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 01:06:11.114150 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 01:06:11.114246 kernel: rtc_cmos 00:04: RTC can wake from S4 Aug 13 01:06:11.114258 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 01:06:11.114337 kernel: rtc_cmos 00:04: registered as rtc0 Aug 13 01:06:11.114413 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T01:06:10 UTC (1755047170) Aug 13 01:06:11.114490 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Aug 13 01:06:11.114501 kernel: NET: Registered PF_INET6 protocol family Aug 13 01:06:11.114510 kernel: Segment Routing with IPv6 Aug 13 01:06:11.114518 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 01:06:11.114526 kernel: NET: Registered PF_PACKET protocol family Aug 13 01:06:11.114534 kernel: Key type dns_resolver registered Aug 13 01:06:11.114545 kernel: IPI shorthand broadcast: enabled Aug 13 01:06:11.114553 kernel: sched_clock: Marking stable (406097448, 99681262)->(568753653, -62974943) Aug 13 01:06:11.114562 kernel: registered taskstats version 1 Aug 13 01:06:11.114570 kernel: Loading compiled-in X.509 certificates Aug 13 01:06:11.114607 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 01:06:11.114640 kernel: Key type .fscrypt registered Aug 13 01:06:11.114649 kernel: Key type fscrypt-provisioning registered Aug 13 01:06:11.114658 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 01:06:11.114667 kernel: ima: Allocated hash algorithm: sha1 Aug 13 01:06:11.114678 kernel: ima: No architecture policies found Aug 13 01:06:11.114686 kernel: clk: Disabling unused clocks Aug 13 01:06:11.114695 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 01:06:11.114703 kernel: Write protecting the kernel read-only data: 28672k Aug 13 01:06:11.114712 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 01:06:11.114720 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 01:06:11.114729 kernel: Run /init as init process Aug 13 01:06:11.114737 kernel: with arguments: Aug 13 01:06:11.114745 kernel: /init Aug 13 01:06:11.114755 kernel: with environment: Aug 13 01:06:11.114763 kernel: HOME=/ Aug 13 01:06:11.114771 kernel: TERM=linux Aug 13 01:06:11.114779 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 01:06:11.114790 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 01:06:11.114800 systemd[1]: Detected virtualization kvm. Aug 13 01:06:11.114818 systemd[1]: Detected architecture x86-64. Aug 13 01:06:11.114826 systemd[1]: Running in initrd. Aug 13 01:06:11.114836 systemd[1]: No hostname configured, using default hostname. Aug 13 01:06:11.114845 systemd[1]: Hostname set to . 
Aug 13 01:06:11.114854 systemd[1]: Initializing machine ID from VM UUID. Aug 13 01:06:11.114863 systemd[1]: Queued start job for default target initrd.target. Aug 13 01:06:11.114872 systemd[1]: Started systemd-ask-password-console.path. Aug 13 01:06:11.114881 systemd[1]: Reached target cryptsetup.target. Aug 13 01:06:11.114890 systemd[1]: Reached target paths.target. Aug 13 01:06:11.114899 systemd[1]: Reached target slices.target. Aug 13 01:06:11.114909 systemd[1]: Reached target swap.target. Aug 13 01:06:11.114925 systemd[1]: Reached target timers.target. Aug 13 01:06:11.114936 systemd[1]: Listening on iscsid.socket. Aug 13 01:06:11.114950 systemd[1]: Listening on iscsiuio.socket. Aug 13 01:06:11.114960 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 01:06:11.114971 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 01:06:11.114981 systemd[1]: Listening on systemd-journald.socket. Aug 13 01:06:11.114990 systemd[1]: Listening on systemd-networkd.socket. Aug 13 01:06:11.114999 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 01:06:11.115008 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 01:06:11.115017 systemd[1]: Reached target sockets.target. Aug 13 01:06:11.115026 systemd[1]: Starting kmod-static-nodes.service... Aug 13 01:06:11.115035 systemd[1]: Finished network-cleanup.service. Aug 13 01:06:11.115045 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:06:11.115055 systemd[1]: Starting systemd-journald.service... Aug 13 01:06:11.115065 systemd[1]: Starting systemd-modules-load.service... Aug 13 01:06:11.115074 systemd[1]: Starting systemd-resolved.service... Aug 13 01:06:11.115083 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 01:06:11.115092 systemd[1]: Finished kmod-static-nodes.service. Aug 13 01:06:11.115101 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 01:06:11.115111 kernel: audit: type=1130 audit(1755047171.109:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.115120 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 01:06:11.115132 systemd-journald[199]: Journal started Aug 13 01:06:11.115180 systemd-journald[199]: Runtime Journal (/run/log/journal/29145594ad34418b9b01eace20374330) is 6.0M, max 48.5M, 42.5M free. Aug 13 01:06:11.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.112416 systemd-modules-load[200]: Inserted module 'overlay' Aug 13 01:06:11.127936 systemd-resolved[201]: Positive Trust Anchors: Aug 13 01:06:11.168181 systemd[1]: Started systemd-journald.service. Aug 13 01:06:11.168203 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 01:06:11.168215 kernel: audit: type=1130 audit(1755047171.148:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.168226 kernel: audit: type=1130 audit(1755047171.148:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:06:11.168236 kernel: audit: type=1130 audit(1755047171.149:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.168247 kernel: audit: type=1130 audit(1755047171.161:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.127957 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:06:11.127984 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 01:06:11.130277 systemd-resolved[201]: Defaulting to hostname 'linux'. Aug 13 01:06:11.176791 kernel: Bridge firewalling registered Aug 13 01:06:11.149093 systemd[1]: Started systemd-resolved.service. Aug 13 01:06:11.149657 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 01:06:11.150075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 01:06:11.162340 systemd[1]: Reached target nss-lookup.target. Aug 13 01:06:11.166729 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 01:06:11.186162 kernel: audit: type=1130 audit(1755047171.180:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.176750 systemd-modules-load[200]: Inserted module 'br_netfilter' Aug 13 01:06:11.179998 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 01:06:11.182384 systemd[1]: Starting dracut-cmdline.service... 
Aug 13 01:06:11.193078 dracut-cmdline[216]: dracut-dracut-053 Aug 13 01:06:11.195334 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 01:06:11.201603 kernel: SCSI subsystem initialized Aug 13 01:06:11.212607 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 01:06:11.212631 kernel: device-mapper: uevent: version 1.0.3 Aug 13 01:06:11.214507 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 01:06:11.217223 systemd-modules-load[200]: Inserted module 'dm_multipath' Aug 13 01:06:11.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.223006 kernel: audit: type=1130 audit(1755047171.218:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.218411 systemd[1]: Finished systemd-modules-load.service. Aug 13 01:06:11.220281 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:06:11.227520 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:06:11.231621 kernel: audit: type=1130 audit(1755047171.227:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.259611 kernel: Loading iSCSI transport class v2.0-870. Aug 13 01:06:11.280644 kernel: iscsi: registered transport (tcp) Aug 13 01:06:11.306639 kernel: iscsi: registered transport (qla4xxx) Aug 13 01:06:11.306710 kernel: QLogic iSCSI HBA Driver Aug 13 01:06:11.339123 systemd[1]: Finished dracut-cmdline.service. Aug 13 01:06:11.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.340178 systemd[1]: Starting dracut-pre-udev.service... Aug 13 01:06:11.345083 kernel: audit: type=1130 audit(1755047171.338:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:06:11.389651 kernel: raid6: avx2x4 gen() 30347 MB/s Aug 13 01:06:11.406635 kernel: raid6: avx2x4 xor() 7288 MB/s Aug 13 01:06:11.423622 kernel: raid6: avx2x2 gen() 27807 MB/s Aug 13 01:06:11.440618 kernel: raid6: avx2x2 xor() 15993 MB/s Aug 13 01:06:11.457616 kernel: raid6: avx2x1 gen() 25914 MB/s Aug 13 01:06:11.474616 kernel: raid6: avx2x1 xor() 15351 MB/s Aug 13 01:06:11.491611 kernel: raid6: sse2x4 gen() 14717 MB/s Aug 13 01:06:11.508611 kernel: raid6: sse2x4 xor() 7478 MB/s Aug 13 01:06:11.525617 kernel: raid6: sse2x2 gen() 15900 MB/s Aug 13 01:06:11.542622 kernel: raid6: sse2x2 xor() 7473 MB/s Aug 13 01:06:11.559615 kernel: raid6: sse2x1 gen() 12206 MB/s Aug 13 01:06:11.576959 kernel: raid6: sse2x1 xor() 7595 MB/s Aug 13 01:06:11.576981 kernel: raid6: using algorithm avx2x4 gen() 30347 MB/s Aug 13 01:06:11.576990 kernel: raid6: .... xor() 7288 MB/s, rmw enabled Aug 13 01:06:11.577639 kernel: raid6: using avx2x2 recovery algorithm Aug 13 01:06:11.590610 kernel: xor: automatically using best checksumming function avx Aug 13 01:06:11.680618 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 01:06:11.689643 systemd[1]: Finished dracut-pre-udev.service. Aug 13 01:06:11.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.690000 audit: BPF prog-id=7 op=LOAD Aug 13 01:06:11.690000 audit: BPF prog-id=8 op=LOAD Aug 13 01:06:11.691941 systemd[1]: Starting systemd-udevd.service... Aug 13 01:06:11.706043 systemd-udevd[400]: Using default interface naming scheme 'v252'. Aug 13 01:06:11.710331 systemd[1]: Started systemd-udevd.service. Aug 13 01:06:11.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.712304 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 01:06:11.724775 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Aug 13 01:06:11.749022 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 01:06:11.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.793663 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 01:06:11.831830 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 01:06:11.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:11.866608 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 13 01:06:11.873036 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 01:06:11.873053 kernel: GPT:9289727 != 19775487 Aug 13 01:06:11.873066 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 01:06:11.873078 kernel: GPT:9289727 != 19775487 Aug 13 01:06:11.873090 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 01:06:11.873103 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 01:06:11.875880 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 01:06:11.879606 kernel: libata version 3.00 loaded. Aug 13 01:06:11.888605 kernel: AVX2 version of gcm_enc/dec engaged. 
Aug 13 01:06:11.888629 kernel: AES CTR mode by8 optimization enabled Aug 13 01:06:11.888639 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 01:06:11.959005 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 01:06:11.959030 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 13 01:06:11.959160 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 01:06:11.959302 kernel: scsi host0: ahci Aug 13 01:06:11.959837 kernel: scsi host1: ahci Aug 13 01:06:11.959932 kernel: scsi host2: ahci Aug 13 01:06:11.960021 kernel: scsi host3: ahci Aug 13 01:06:11.960107 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (446) Aug 13 01:06:11.960117 kernel: scsi host4: ahci Aug 13 01:06:11.960202 kernel: scsi host5: ahci Aug 13 01:06:11.960322 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Aug 13 01:06:11.960332 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Aug 13 01:06:11.960341 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Aug 13 01:06:11.960350 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Aug 13 01:06:11.960359 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Aug 13 01:06:11.960367 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Aug 13 01:06:11.904380 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 01:06:11.992461 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 01:06:12.000885 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 01:06:12.007145 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 01:06:12.011006 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 01:06:12.012551 systemd[1]: Starting disk-uuid.service... Aug 13 01:06:12.264994 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 01:06:12.265065 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 01:06:12.265077 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Aug 13 01:06:12.266962 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 01:06:12.267605 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 01:06:12.268612 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Aug 13 01:06:12.268636 kernel: ata3.00: applying bridge limits Aug 13 01:06:12.269898 kernel: ata3.00: configured for UDMA/100 Aug 13 01:06:12.270603 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 13 01:06:12.277295 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 01:06:12.334822 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Aug 13 01:06:12.352242 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 01:06:12.352258 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 13 01:06:12.562849 disk-uuid[526]: Primary Header is updated. Aug 13 01:06:12.562849 disk-uuid[526]: Secondary Entries is updated. Aug 13 01:06:12.562849 disk-uuid[526]: Secondary Header is updated. Aug 13 01:06:12.566595 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 01:06:12.570609 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 01:06:12.573599 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 01:06:13.573621 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 01:06:13.573705 disk-uuid[542]: The operation has completed successfully. 
Aug 13 01:06:13.597977 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 01:06:13.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.598061 systemd[1]: Finished disk-uuid.service. Aug 13 01:06:13.607047 systemd[1]: Starting verity-setup.service... Aug 13 01:06:13.623613 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Aug 13 01:06:13.642620 systemd[1]: Found device dev-mapper-usr.device. Aug 13 01:06:13.643934 systemd[1]: Mounting sysusr-usr.mount... Aug 13 01:06:13.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.645753 systemd[1]: Finished verity-setup.service. Aug 13 01:06:13.707606 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 01:06:13.707653 systemd[1]: Mounted sysusr-usr.mount. Aug 13 01:06:13.707832 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 01:06:13.708523 systemd[1]: Starting ignition-setup.service... Aug 13 01:06:13.710617 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 01:06:13.718810 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:06:13.718844 kernel: BTRFS info (device vda6): using free space tree Aug 13 01:06:13.718853 kernel: BTRFS info (device vda6): has skinny extents Aug 13 01:06:13.727723 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 01:06:13.736098 systemd[1]: Finished ignition-setup.service. Aug 13 01:06:13.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.737700 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 01:06:13.793103 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 01:06:13.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.794000 audit: BPF prog-id=9 op=LOAD Aug 13 01:06:13.795824 systemd[1]: Starting systemd-networkd.service... 
Aug 13 01:06:13.800327 ignition[640]: Ignition 2.14.0 Aug 13 01:06:13.800338 ignition[640]: Stage: fetch-offline Aug 13 01:06:13.800400 ignition[640]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:06:13.800409 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:06:13.800527 ignition[640]: parsed url from cmdline: "" Aug 13 01:06:13.800530 ignition[640]: no config URL provided Aug 13 01:06:13.800535 ignition[640]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:06:13.800542 ignition[640]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:06:13.800562 ignition[640]: op(1): [started] loading QEMU firmware config module Aug 13 01:06:13.800566 ignition[640]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 13 01:06:13.812499 ignition[640]: op(1): [finished] loading QEMU firmware config module Aug 13 01:06:13.823153 systemd-networkd[717]: lo: Link UP Aug 13 01:06:13.823165 systemd-networkd[717]: lo: Gained carrier Aug 13 01:06:13.823856 systemd-networkd[717]: Enumeration completed Aug 13 01:06:13.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.823950 systemd[1]: Started systemd-networkd.service. Aug 13 01:06:13.824328 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:06:13.825691 systemd-networkd[717]: eth0: Link UP Aug 13 01:06:13.825694 systemd-networkd[717]: eth0: Gained carrier Aug 13 01:06:13.826122 systemd[1]: Reached target network.target. Aug 13 01:06:13.828341 systemd[1]: Starting iscsiuio.service... Aug 13 01:06:13.849377 systemd[1]: Started iscsiuio.service. Aug 13 01:06:13.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.851299 systemd[1]: Starting iscsid.service... Aug 13 01:06:13.855285 iscsid[724]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 01:06:13.855285 iscsid[724]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 01:06:13.855285 iscsid[724]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 01:06:13.855285 iscsid[724]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 01:06:13.855285 iscsid[724]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 01:06:13.855285 iscsid[724]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 01:06:13.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.859945 systemd[1]: Started iscsid.service. Aug 13 01:06:13.863098 systemd[1]: Starting dracut-initqueue.service... 
Aug 13 01:06:13.874178 ignition[640]: parsing config with SHA512: 943eeeab05c64a6f936398ee68ac90ba21baf5e49632c403739e576be7eddfe41baf100e0396489313fdeabd214799f2062fa14afc54f746e23b3003da5779a5 Aug 13 01:06:13.878526 systemd[1]: Finished dracut-initqueue.service. Aug 13 01:06:13.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.879798 systemd[1]: Reached target remote-fs-pre.target. Aug 13 01:06:13.881343 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 01:06:13.881790 systemd[1]: Reached target remote-fs.target. Aug 13 01:06:13.884739 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 01:06:13.886239 systemd[1]: Starting dracut-pre-mount.service... Aug 13 01:06:13.915890 unknown[640]: fetched base config from "system" Aug 13 01:06:13.916092 unknown[640]: fetched user config from "qemu" Aug 13 01:06:13.916565 ignition[640]: fetch-offline: fetch-offline passed Aug 13 01:06:13.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.917563 systemd[1]: Finished dracut-pre-mount.service. Aug 13 01:06:13.916639 ignition[640]: Ignition finished successfully Aug 13 01:06:13.918923 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 01:06:13.920546 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 01:06:13.921439 systemd[1]: Starting ignition-kargs.service... Aug 13 01:06:13.931355 ignition[738]: Ignition 2.14.0 Aug 13 01:06:13.931364 ignition[738]: Stage: kargs Aug 13 01:06:13.931490 ignition[738]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:06:13.931499 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:06:13.933858 systemd[1]: Finished ignition-kargs.service. Aug 13 01:06:13.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.932684 ignition[738]: kargs: kargs passed Aug 13 01:06:13.932723 ignition[738]: Ignition finished successfully Aug 13 01:06:13.936536 systemd[1]: Starting ignition-disks.service... Aug 13 01:06:13.951223 ignition[744]: Ignition 2.14.0 Aug 13 01:06:13.951234 ignition[744]: Stage: disks Aug 13 01:06:13.951353 ignition[744]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:06:13.951364 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:06:13.953771 systemd[1]: Finished ignition-disks.service. Aug 13 01:06:13.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.952799 ignition[744]: disks: disks passed Aug 13 01:06:13.955622 systemd[1]: Reached target initrd-root-device.target. 
Aug 13 01:06:13.952865 ignition[744]: Ignition finished successfully Aug 13 01:06:13.957182 systemd[1]: Reached target local-fs-pre.target. Aug 13 01:06:13.958107 systemd[1]: Reached target local-fs.target. Aug 13 01:06:13.959563 systemd[1]: Reached target sysinit.target. Aug 13 01:06:13.959627 systemd[1]: Reached target basic.target. Aug 13 01:06:13.960754 systemd[1]: Starting systemd-fsck-root.service... Aug 13 01:06:13.977644 systemd-fsck[752]: ROOT: clean, 629/553520 files, 56027/553472 blocks Aug 13 01:06:13.983443 systemd[1]: Finished systemd-fsck-root.service. Aug 13 01:06:13.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:13.986879 systemd[1]: Mounting sysroot.mount... Aug 13 01:06:13.995627 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 01:06:13.995892 systemd[1]: Mounted sysroot.mount. Aug 13 01:06:13.996016 systemd[1]: Reached target initrd-root-fs.target. Aug 13 01:06:13.999451 systemd[1]: Mounting sysroot-usr.mount... Aug 13 01:06:14.000521 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 13 01:06:14.000664 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 01:06:14.000695 systemd[1]: Reached target ignition-diskful.target. Aug 13 01:06:14.009450 systemd[1]: Mounted sysroot-usr.mount. Aug 13 01:06:14.010928 systemd[1]: Starting initrd-setup-root.service... Aug 13 01:06:14.018113 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 01:06:14.022812 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Aug 13 01:06:14.027924 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 01:06:14.031945 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 01:06:14.057556 systemd[1]: Finished initrd-setup-root.service. Aug 13 01:06:14.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:14.060048 systemd[1]: Starting ignition-mount.service... Aug 13 01:06:14.062558 systemd[1]: Starting sysroot-boot.service... Aug 13 01:06:14.065721 bash[803]: umount: /sysroot/usr/share/oem: not mounted. Aug 13 01:06:14.075649 ignition[804]: INFO : Ignition 2.14.0 Aug 13 01:06:14.075649 ignition[804]: INFO : Stage: mount Aug 13 01:06:14.077519 ignition[804]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:06:14.077519 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:06:14.077519 ignition[804]: INFO : mount: mount passed Aug 13 01:06:14.077519 ignition[804]: INFO : Ignition finished successfully Aug 13 01:06:14.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:14.078067 systemd[1]: Finished ignition-mount.service. Aug 13 01:06:14.089900 systemd[1]: Finished sysroot-boot.service. Aug 13 01:06:14.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 01:06:14.653691 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 01:06:14.664389 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814) Aug 13 01:06:14.664427 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:06:14.664441 kernel: BTRFS info (device vda6): using free space tree Aug 13 01:06:14.665166 kernel: BTRFS info (device vda6): has skinny extents Aug 13 01:06:14.670976 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 01:06:14.672922 systemd[1]: Starting ignition-files.service... Aug 13 01:06:14.687521 ignition[834]: INFO : Ignition 2.14.0 Aug 13 01:06:14.687521 ignition[834]: INFO : Stage: files Aug 13 01:06:14.689683 ignition[834]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:06:14.689683 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:06:14.689683 ignition[834]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:06:14.694038 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:06:14.694038 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:06:14.694038 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:06:14.694038 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:06:14.694038 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:06:14.693806 unknown[834]: wrote ssh authorized keys file for user: core Aug 13 01:06:14.702007 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 01:06:14.702007 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 01:06:14.702007 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 01:06:14.702007 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 01:06:14.754545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 01:06:15.033836 systemd-networkd[717]: eth0: Gained IPv6LL Aug 13 01:06:15.051225 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:06:15.053174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 01:06:15.423803 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 01:06:15.965404 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:06:15.967885 ignition[834]: INFO : files: op(c): [started] processing unit "containerd.service" Aug 13 01:06:15.967885 ignition[834]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 01:06:15.971169 ignition[834]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 01:06:15.971169 ignition[834]: INFO : files: op(c): [finished] processing unit "containerd.service" Aug 13 01:06:15.971169 ignition[834]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Aug 13 01:06:15.975628 ignition[834]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:06:15.975628 ignition[834]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:06:15.975628 ignition[834]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Aug 13 01:06:15.975628 ignition[834]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Aug 13 01:06:15.975628 ignition[834]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 01:06:15.983506 ignition[834]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 01:06:15.983506 ignition[834]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Aug 13 01:06:15.983506 ignition[834]: INFO : files: 
op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 13 01:06:15.983506 ignition[834]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 01:06:15.983506 ignition[834]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 01:06:15.983506 ignition[834]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 01:06:16.017716 ignition[834]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 01:06:16.019292 ignition[834]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 01:06:16.019292 ignition[834]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:06:16.019292 ignition[834]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:06:16.019292 ignition[834]: INFO : files: files passed Aug 13 01:06:16.019292 ignition[834]: INFO : Ignition finished successfully Aug 13 01:06:16.026262 systemd[1]: Finished ignition-files.service. Aug 13 01:06:16.031399 kernel: kauditd_printk_skb: 24 callbacks suppressed Aug 13 01:06:16.031423 kernel: audit: type=1130 audit(1755047176.026:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.031424 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 01:06:16.032330 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 01:06:16.033413 systemd[1]: Starting ignition-quench.service... Aug 13 01:06:16.036839 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:06:16.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.038865 initrd-setup-root-after-ignition[859]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Aug 13 01:06:16.046682 kernel: audit: type=1130 audit(1755047176.038:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.046700 kernel: audit: type=1131 audit(1755047176.038:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.046711 kernel: audit: type=1130 audit(1755047176.046:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:06:16.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.036932 systemd[1]: Finished ignition-quench.service. Aug 13 01:06:16.052067 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:06:16.041842 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 01:06:16.046745 systemd[1]: Reached target ignition-complete.target. Aug 13 01:06:16.051209 systemd[1]: Starting initrd-parse-etc.service... Aug 13 01:06:16.062873 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:06:16.062954 systemd[1]: Finished initrd-parse-etc.service. Aug 13 01:06:16.071745 kernel: audit: type=1130 audit(1755047176.064:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.071767 kernel: audit: type=1131 audit(1755047176.064:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.064783 systemd[1]: Reached target initrd-fs.target. Aug 13 01:06:16.071752 systemd[1]: Reached target initrd.target. Aug 13 01:06:16.072526 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 01:06:16.073331 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 01:06:16.082891 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 01:06:16.087923 kernel: audit: type=1130 audit(1755047176.083:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.084384 systemd[1]: Starting initrd-cleanup.service... Aug 13 01:06:16.092337 systemd[1]: Stopped target nss-lookup.target. Aug 13 01:06:16.093239 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 01:06:16.094880 systemd[1]: Stopped target timers.target. Aug 13 01:06:16.096439 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:06:16.102281 kernel: audit: type=1131 audit(1755047176.097:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.096530 systemd[1]: Stopped dracut-pre-pivot.service. 
Aug 13 01:06:16.098043 systemd[1]: Stopped target initrd.target. Aug 13 01:06:16.102406 systemd[1]: Stopped target basic.target. Aug 13 01:06:16.103923 systemd[1]: Stopped target ignition-complete.target. Aug 13 01:06:16.105497 systemd[1]: Stopped target ignition-diskful.target. Aug 13 01:06:16.107037 systemd[1]: Stopped target initrd-root-device.target. Aug 13 01:06:16.108777 systemd[1]: Stopped target remote-fs.target. Aug 13 01:06:16.110344 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 01:06:16.112043 systemd[1]: Stopped target sysinit.target. Aug 13 01:06:16.113539 systemd[1]: Stopped target local-fs.target. Aug 13 01:06:16.115084 systemd[1]: Stopped target local-fs-pre.target. Aug 13 01:06:16.116627 systemd[1]: Stopped target swap.target. Aug 13 01:06:16.123805 kernel: audit: type=1131 audit(1755047176.119:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.118060 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:06:16.118157 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 01:06:16.129981 kernel: audit: type=1131 audit(1755047176.125:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.119738 systemd[1]: Stopped target cryptsetup.target. Aug 13 01:06:16.123844 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:06:16.123935 systemd[1]: Stopped dracut-initqueue.service. Aug 13 01:06:16.125699 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:06:16.125787 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 01:06:16.130076 systemd[1]: Stopped target paths.target. Aug 13 01:06:16.130168 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:06:16.133618 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 01:06:16.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.135339 systemd[1]: Stopped target slices.target. Aug 13 01:06:16.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.136934 systemd[1]: Stopped target sockets.target. Aug 13 01:06:16.145018 iscsid[724]: iscsid shutting down. Aug 13 01:06:16.138451 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:06:16.138543 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
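Every unit transition in this teardown is mirrored by an audit SERVICE_START or SERVICE_STOP record like the ones interleaved above. A small sketch that tallies those records from a saved copy of this console output; the file name console.log is hypothetical:

import re
from collections import Counter

# Matches e.g.  audit[1]: SERVICE_STOP ... msg='unit=dracut-pre-pivot ...'
PATTERN = re.compile(r"audit\[1\]: (SERVICE_START|SERVICE_STOP) .*?unit=([\w@.\\-]+)")

def tally(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            for action, unit in PATTERN.findall(line):
                counts[(unit, action)] += 1
    return counts

if __name__ == "__main__":
    for (unit, action), n in sorted(tally("console.log").items()):
        print(f"{unit:45s} {action:13s} {n}")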
Aug 13 01:06:16.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.140485 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:06:16.152759 ignition[874]: INFO : Ignition 2.14.0 Aug 13 01:06:16.152759 ignition[874]: INFO : Stage: umount Aug 13 01:06:16.152759 ignition[874]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:06:16.152759 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:06:16.152759 ignition[874]: INFO : umount: umount passed Aug 13 01:06:16.152759 ignition[874]: INFO : Ignition finished successfully Aug 13 01:06:16.140572 systemd[1]: Stopped ignition-files.service. Aug 13 01:06:16.142749 systemd[1]: Stopping ignition-mount.service... Aug 13 01:06:16.143800 systemd[1]: Stopping iscsid.service... Aug 13 01:06:16.145938 systemd[1]: Stopping sysroot-boot.service... Aug 13 01:06:16.147734 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:06:16.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.147896 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 01:06:16.149608 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:06:16.149736 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 01:06:16.156969 systemd[1]: iscsid.service: Deactivated successfully. Aug 13 01:06:16.158805 systemd[1]: Stopped iscsid.service. Aug 13 01:06:16.170232 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:06:16.171780 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:06:16.173019 systemd[1]: Stopped ignition-mount.service. Aug 13 01:06:16.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.175231 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:06:16.176480 systemd[1]: Closed iscsid.socket. Aug 13 01:06:16.177975 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:06:16.179000 systemd[1]: Stopped ignition-disks.service. Aug 13 01:06:16.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.180810 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:06:16.180845 systemd[1]: Stopped ignition-kargs.service. Aug 13 01:06:16.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.183503 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:06:16.183558 systemd[1]: Stopped ignition-setup.service. Aug 13 01:06:16.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Aug 13 01:06:16.184620 systemd[1]: Stopping iscsiuio.service... Aug 13 01:06:16.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.187201 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:06:16.187275 systemd[1]: Finished initrd-cleanup.service. Aug 13 01:06:16.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.188997 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 01:06:16.189068 systemd[1]: Stopped iscsiuio.service. Aug 13 01:06:16.191548 systemd[1]: Stopped target network.target. Aug 13 01:06:16.192658 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:06:16.192700 systemd[1]: Closed iscsiuio.socket. Aug 13 01:06:16.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.193535 systemd[1]: Stopping systemd-networkd.service... Aug 13 01:06:16.195259 systemd[1]: Stopping systemd-resolved.service... Aug 13 01:06:16.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.196624 systemd-networkd[717]: eth0: DHCPv6 lease lost Aug 13 01:06:16.205000 audit: BPF prog-id=9 op=UNLOAD Aug 13 01:06:16.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.197853 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:06:16.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.197927 systemd[1]: Stopped systemd-networkd.service. Aug 13 01:06:16.200786 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:06:16.200858 systemd[1]: Closed systemd-networkd.socket. Aug 13 01:06:16.202506 systemd[1]: Stopping network-cleanup.service... Aug 13 01:06:16.203546 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:06:16.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.203599 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 01:06:16.205717 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:06:16.220000 audit: BPF prog-id=6 op=UNLOAD Aug 13 01:06:16.205751 systemd[1]: Stopped systemd-sysctl.service. Aug 13 01:06:16.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:06:16.208034 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 01:06:16.208070 systemd[1]: Stopped systemd-modules-load.service. Aug 13 01:06:16.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.209800 systemd[1]: Stopping systemd-udevd.service... Aug 13 01:06:16.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.214480 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:06:16.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.214969 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:06:16.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.215044 systemd[1]: Stopped systemd-resolved.service. Aug 13 01:06:16.220893 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:06:16.221019 systemd[1]: Stopped systemd-udevd.service. Aug 13 01:06:16.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:16.223040 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:06:16.223118 systemd[1]: Stopped network-cleanup.service. Aug 13 01:06:16.225495 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:06:16.225535 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 01:06:16.227184 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:06:16.227226 systemd[1]: Closed systemd-udevd-kernel.socket. 
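The run-credentials-systemd\x2dsysctl.service.mount name above shows systemd's unit-name escaping: characters such as '-' inside a path component are written as \xNN hex escapes. A minimal sketch that undoes just that \xNN form (the full escaping scheme also maps '/' to '-', which this ignores):

import re

def unescape_unit(name: str) -> str:
    # Turn each \xNN escape back into the byte it encodes, e.g. \x2d -> '-'.
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), name)

if __name__ == "__main__":
    print(unescape_unit(r"run-credentials-systemd\x2dsysctl.service.mount"))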
Aug 13 01:06:16.227315 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:06:16.227351 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 01:06:16.250000 audit: BPF prog-id=5 op=UNLOAD Aug 13 01:06:16.250000 audit: BPF prog-id=4 op=UNLOAD Aug 13 01:06:16.250000 audit: BPF prog-id=3 op=UNLOAD Aug 13 01:06:16.227537 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:06:16.227568 systemd[1]: Stopped dracut-cmdline.service. Aug 13 01:06:16.251000 audit: BPF prog-id=8 op=UNLOAD Aug 13 01:06:16.251000 audit: BPF prog-id=7 op=UNLOAD Aug 13 01:06:16.227952 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:06:16.227988 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 01:06:16.228786 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 01:06:16.229086 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:06:16.229123 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Aug 13 01:06:16.231974 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:06:16.232010 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 01:06:16.233447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:06:16.233480 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 01:06:16.235074 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 01:06:16.235436 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:06:16.235512 systemd[1]: Stopped sysroot-boot.service. Aug 13 01:06:16.236512 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:06:16.236574 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 01:06:16.238398 systemd[1]: Reached target initrd-switch-root.target. Aug 13 01:06:16.269324 systemd-journald[199]: Received SIGTERM from PID 1 (n/a). Aug 13 01:06:16.239977 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:06:16.240012 systemd[1]: Stopped initrd-setup-root.service. Aug 13 01:06:16.242246 systemd[1]: Starting initrd-switch-root.service... Aug 13 01:06:16.248771 systemd[1]: Switching root. Aug 13 01:06:16.272552 systemd-journald[199]: Journal stopped Aug 13 01:06:19.535786 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 01:06:19.535839 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 01:06:19.535857 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 01:06:19.535867 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:06:19.535877 kernel: SELinux: policy capability open_perms=1 Aug 13 01:06:19.535894 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:06:19.535905 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:06:19.535919 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:06:19.535929 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:06:19.535938 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:06:19.535951 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:06:19.535962 systemd[1]: Successfully loaded SELinux policy in 42.799ms. Aug 13 01:06:19.535980 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.891ms. 
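The SELinux lines above show the policy being loaded with several unknown classes allowed, and the AVC records elsewhere in this log carry permissive=1. One way to inspect the resulting mode from userspace is to read the enforce node on selinuxfs, which is roughly what getenforce consults; a minimal sketch, assuming the standard /sys/fs/selinux mount point:

from pathlib import Path

ENFORCE = Path("/sys/fs/selinux/enforce")

def selinux_mode() -> str:
    if not ENFORCE.exists():
        return "disabled (selinuxfs not mounted)"
    return "enforcing" if ENFORCE.read_text().strip() == "1" else "permissive"

if __name__ == "__main__":
    print(selinux_mode())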
Aug 13 01:06:19.535993 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 01:06:19.536009 systemd[1]: Detected virtualization kvm. Aug 13 01:06:19.536021 systemd[1]: Detected architecture x86-64. Aug 13 01:06:19.536032 systemd[1]: Detected first boot. Aug 13 01:06:19.536049 systemd[1]: Initializing machine ID from VM UUID. Aug 13 01:06:19.536067 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 01:06:19.536081 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:06:19.536100 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:06:19.536112 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:06:19.536127 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:06:19.536144 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:06:19.536158 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 13 01:06:19.536172 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 01:06:19.536195 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 01:06:19.536206 systemd[1]: Created slice system-getty.slice. Aug 13 01:06:19.536217 systemd[1]: Created slice system-modprobe.slice. Aug 13 01:06:19.536228 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 01:06:19.536239 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 01:06:19.536250 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 01:06:19.536261 systemd[1]: Created slice user.slice. Aug 13 01:06:19.536277 systemd[1]: Started systemd-ask-password-console.path. Aug 13 01:06:19.536288 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 01:06:19.536303 systemd[1]: Set up automount boot.automount. Aug 13 01:06:19.536317 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 01:06:19.536327 systemd[1]: Reached target integritysetup.target. Aug 13 01:06:19.536338 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 01:06:19.536348 systemd[1]: Reached target remote-fs.target. Aug 13 01:06:19.536359 systemd[1]: Reached target slices.target. Aug 13 01:06:19.536370 systemd[1]: Reached target swap.target. Aug 13 01:06:19.536380 systemd[1]: Reached target torcx.target. Aug 13 01:06:19.536399 systemd[1]: Reached target veritysetup.target. Aug 13 01:06:19.536410 systemd[1]: Listening on systemd-coredump.socket. Aug 13 01:06:19.536420 systemd[1]: Listening on systemd-initctl.socket. Aug 13 01:06:19.536431 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 01:06:19.536443 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 01:06:19.536453 systemd[1]: Listening on systemd-journald.socket. Aug 13 01:06:19.536463 systemd[1]: Listening on systemd-networkd.socket. Aug 13 01:06:19.536474 systemd[1]: Listening on systemd-udevd-control.socket. 
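In the "systemd 252 running" banner above, each +NAME is a feature compiled in and each -NAME one compiled out. A tiny sketch that splits such a string into the two groups; the FEATURES value here is a shortened copy of the banner, not the full list:

# Shortened copy of the compile-time feature string from the banner above.
FEATURES = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -BPF_FRAMEWORK -XKBCOMMON +UTMP"

enabled = [flag[1:] for flag in FEATURES.split() if flag.startswith("+")]
disabled = [flag[1:] for flag in FEATURES.split() if flag.startswith("-")]

print("built with:   ", ", ".join(enabled))
print("built without:", ", ".join(disabled))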
Aug 13 01:06:19.536484 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 01:06:19.536494 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 01:06:19.536511 systemd[1]: Mounting dev-hugepages.mount... Aug 13 01:06:19.536522 systemd[1]: Mounting dev-mqueue.mount... Aug 13 01:06:19.536533 systemd[1]: Mounting media.mount... Aug 13 01:06:19.536543 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:06:19.536553 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 01:06:19.536564 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 01:06:19.536608 systemd[1]: Mounting tmp.mount... Aug 13 01:06:19.536620 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 01:06:19.536631 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:06:19.536652 systemd[1]: Starting kmod-static-nodes.service... Aug 13 01:06:19.536662 systemd[1]: Starting modprobe@configfs.service... Aug 13 01:06:19.536673 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:06:19.536683 systemd[1]: Starting modprobe@drm.service... Aug 13 01:06:19.536694 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:06:19.536704 systemd[1]: Starting modprobe@fuse.service... Aug 13 01:06:19.536716 systemd[1]: Starting modprobe@loop.service... Aug 13 01:06:19.536727 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:06:19.536737 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 01:06:19.536753 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 01:06:19.536763 systemd[1]: Starting systemd-journald.service... Aug 13 01:06:19.536774 kernel: fuse: init (API version 7.34) Aug 13 01:06:19.536784 systemd[1]: Starting systemd-modules-load.service... Aug 13 01:06:19.536795 systemd[1]: Starting systemd-network-generator.service... Aug 13 01:06:19.536805 kernel: loop: module loaded Aug 13 01:06:19.536815 systemd[1]: Starting systemd-remount-fs.service... Aug 13 01:06:19.536826 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 01:06:19.536836 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:06:19.536852 systemd[1]: Mounted dev-hugepages.mount. Aug 13 01:06:19.536862 systemd[1]: Mounted dev-mqueue.mount. Aug 13 01:06:19.536872 systemd[1]: Mounted media.mount. Aug 13 01:06:19.536882 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 01:06:19.536892 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 01:06:19.536903 systemd[1]: Mounted tmp.mount. Aug 13 01:06:19.536917 systemd-journald[1015]: Journal started Aug 13 01:06:19.536955 systemd-journald[1015]: Runtime Journal (/run/log/journal/29145594ad34418b9b01eace20374330) is 6.0M, max 48.5M, 42.5M free. 
Aug 13 01:06:19.444000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 01:06:19.444000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 01:06:19.534000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 01:06:19.534000 audit[1015]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc9ee03ab0 a2=4000 a3=7ffc9ee03b4c items=0 ppid=1 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:19.534000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 01:06:19.539103 systemd[1]: Started systemd-journald.service. Aug 13 01:06:19.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.540254 systemd[1]: Finished kmod-static-nodes.service. Aug 13 01:06:19.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.541360 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:06:19.541528 systemd[1]: Finished modprobe@configfs.service. Aug 13 01:06:19.542644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:06:19.542782 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:06:19.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.543839 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:06:19.543990 systemd[1]: Finished modprobe@drm.service. Aug 13 01:06:19.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:06:19.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.545157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:06:19.545322 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:06:19.546551 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 01:06:19.547664 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:06:19.547816 systemd[1]: Finished modprobe@fuse.service. Aug 13 01:06:19.548975 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:06:19.549116 systemd[1]: Finished modprobe@loop.service. Aug 13 01:06:19.550235 systemd[1]: Finished systemd-modules-load.service. Aug 13 01:06:19.551425 systemd[1]: Finished systemd-network-generator.service. Aug 13 01:06:19.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.552831 systemd[1]: Finished systemd-remount-fs.service. Aug 13 01:06:19.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.554023 systemd[1]: Reached target network-pre.target. Aug 13 01:06:19.556116 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 01:06:19.558096 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 01:06:19.559058 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
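remount-root.service is skipped above because its ConditionPathIsReadWrite=!/ check only passes while the root filesystem is still read-only, and by this point / is writable. A minimal sketch of the equivalent userspace check, assuming a Unix host where os.statvfs is available:

import os

def is_readonly(path: str) -> bool:
    # ST_RDONLY is set in f_flag when the filesystem backing `path` is mounted read-only.
    return bool(os.statvfs(path).f_flag & os.ST_RDONLY)

if __name__ == "__main__":
    print("/ is", "read-only" if is_readonly("/") else "read-write")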
Aug 13 01:06:19.560997 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 01:06:19.564173 systemd[1]: Starting systemd-journal-flush.service... Aug 13 01:06:19.565220 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:06:19.566233 systemd[1]: Starting systemd-random-seed.service... Aug 13 01:06:19.567291 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:06:19.568525 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:06:19.571781 systemd-journald[1015]: Time spent on flushing to /var/log/journal/29145594ad34418b9b01eace20374330 is 24.945ms for 1040 entries. Aug 13 01:06:19.571781 systemd-journald[1015]: System Journal (/var/log/journal/29145594ad34418b9b01eace20374330) is 8.0M, max 195.6M, 187.6M free. Aug 13 01:06:19.614393 systemd-journald[1015]: Received client request to flush runtime journal. Aug 13 01:06:19.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.571003 systemd[1]: Starting systemd-sysusers.service... Aug 13 01:06:19.576435 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 01:06:19.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:19.616428 udevadm[1058]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 01:06:19.577617 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 01:06:19.584425 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 01:06:19.585770 systemd[1]: Finished systemd-random-seed.service. Aug 13 01:06:19.586734 systemd[1]: Reached target first-boot-complete.target. Aug 13 01:06:19.588863 systemd[1]: Starting systemd-udev-settle.service... Aug 13 01:06:19.590650 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:06:19.600167 systemd[1]: Finished systemd-sysusers.service. Aug 13 01:06:19.602092 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 01:06:19.615232 systemd[1]: Finished systemd-journal-flush.service. Aug 13 01:06:19.623407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 01:06:19.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.127654 systemd[1]: Finished systemd-hwdb-update.service. 
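journald notes above that flushing 1040 runtime entries to the persistent journal under /var/log/journal took 24.945 ms, which works out to roughly 0.024 ms per entry; a trivial sketch of that arithmetic:

flush_ms = 24.945   # figures taken from the journald message above
entries = 1040

print(f"average flush cost: {flush_ms / entries:.3f} ms per entry")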
Aug 13 01:06:20.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.130468 systemd[1]: Starting systemd-udevd.service... Aug 13 01:06:20.147398 systemd-udevd[1067]: Using default interface naming scheme 'v252'. Aug 13 01:06:20.160743 systemd[1]: Started systemd-udevd.service. Aug 13 01:06:20.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.165200 systemd[1]: Starting systemd-networkd.service... Aug 13 01:06:20.173960 systemd[1]: Starting systemd-userdbd.service... Aug 13 01:06:20.191429 systemd[1]: Found device dev-ttyS0.device. Aug 13 01:06:20.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.212511 systemd[1]: Started systemd-userdbd.service. Aug 13 01:06:20.236592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 01:06:20.248616 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 01:06:20.252605 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:06:20.252957 systemd-networkd[1079]: lo: Link UP Aug 13 01:06:20.253244 systemd-networkd[1079]: lo: Gained carrier Aug 13 01:06:20.253709 systemd-networkd[1079]: Enumeration completed Aug 13 01:06:20.253919 systemd[1]: Started systemd-networkd.service. Aug 13 01:06:20.254160 systemd-networkd[1079]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:06:20.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:06:20.255568 systemd-networkd[1079]: eth0: Link UP Aug 13 01:06:20.255682 systemd-networkd[1079]: eth0: Gained carrier Aug 13 01:06:20.265720 systemd-networkd[1079]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 01:06:20.267000 audit[1069]: AVC avc: denied { confidentiality } for pid=1069 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 01:06:20.267000 audit[1069]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560d7df21cc0 a1=338ac a2=7f4e352f4bc5 a3=5 items=110 ppid=1067 pid=1069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:20.267000 audit: CWD cwd="/" Aug 13 01:06:20.267000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=1 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=2 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=3 name=(null) inode=15367 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=4 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=5 name=(null) inode=15368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=6 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=7 name=(null) inode=15369 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=8 name=(null) inode=15369 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=9 name=(null) inode=15370 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=10 name=(null) inode=15369 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=11 name=(null) inode=15371 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=12 name=(null) 
inode=15369 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=13 name=(null) inode=15372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=14 name=(null) inode=15369 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=15 name=(null) inode=15373 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=16 name=(null) inode=15369 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=17 name=(null) inode=15374 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=18 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=19 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=20 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=21 name=(null) inode=15376 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=22 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=23 name=(null) inode=15377 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=24 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=25 name=(null) inode=15378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=26 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=27 name=(null) inode=15379 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=28 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=29 name=(null) inode=15380 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=30 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=31 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=32 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=33 name=(null) inode=15382 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=34 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=35 name=(null) inode=15383 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=36 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=37 name=(null) inode=15384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=38 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=39 name=(null) inode=15385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=40 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=41 name=(null) inode=15386 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=42 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=43 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=44 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=45 name=(null) inode=15388 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=46 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=47 name=(null) inode=15389 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=48 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=49 name=(null) inode=15390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=50 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=51 name=(null) inode=15391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=52 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=53 name=(null) inode=15392 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=55 name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=56 name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=57 name=(null) inode=15394 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=58 name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=59 name=(null) inode=15395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=60 name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=61 
name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=62 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=63 name=(null) inode=15397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=64 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=65 name=(null) inode=15398 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=66 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=67 name=(null) inode=15399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=68 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=69 name=(null) inode=15400 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=70 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=71 name=(null) inode=15401 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=72 name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=73 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=74 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=75 name=(null) inode=15403 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=76 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=77 name=(null) inode=15404 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=78 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=79 name=(null) inode=15405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=80 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=81 name=(null) inode=15406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=82 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=83 name=(null) inode=15407 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=84 name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=85 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=86 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=87 name=(null) inode=15409 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=88 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=89 name=(null) inode=15410 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=90 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=91 name=(null) inode=15411 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=92 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=93 name=(null) inode=15412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=94 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=95 name=(null) inode=15413 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=96 name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=97 name=(null) inode=15414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=98 name=(null) inode=15414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=99 name=(null) inode=15415 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=100 name=(null) inode=15414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=101 name=(null) inode=15416 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=102 name=(null) inode=15414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=103 name=(null) inode=15417 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=104 name=(null) inode=15414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=105 name=(null) inode=15418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=106 name=(null) inode=15414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=107 name=(null) inode=15419 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: PATH item=109 name=(null) inode=13232 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:06:20.267000 audit: 
PROCTITLE proctitle="(udev-worker)" Aug 13 01:06:20.325604 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 01:06:20.337605 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:06:20.351621 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 01:06:20.354169 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 01:06:20.354300 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 01:06:20.370873 kernel: kvm: Nested Virtualization enabled Aug 13 01:06:20.370963 kernel: SVM: kvm: Nested Paging enabled Aug 13 01:06:20.370979 kernel: SVM: Virtual VMLOAD VMSAVE supported Aug 13 01:06:20.372072 kernel: SVM: Virtual GIF supported Aug 13 01:06:20.389615 kernel: EDAC MC: Ver: 3.0.0 Aug 13 01:06:20.412054 systemd[1]: Finished systemd-udev-settle.service. Aug 13 01:06:20.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.414275 systemd[1]: Starting lvm2-activation-early.service... Aug 13 01:06:20.421974 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:06:20.450379 systemd[1]: Finished lvm2-activation-early.service. Aug 13 01:06:20.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.451605 systemd[1]: Reached target cryptsetup.target. Aug 13 01:06:20.454124 systemd[1]: Starting lvm2-activation.service... Aug 13 01:06:20.457401 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:06:20.489776 systemd[1]: Finished lvm2-activation.service. Aug 13 01:06:20.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.490808 systemd[1]: Reached target local-fs-pre.target. Aug 13 01:06:20.491699 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:06:20.491722 systemd[1]: Reached target local-fs.target. Aug 13 01:06:20.492521 systemd[1]: Reached target machines.target. Aug 13 01:06:20.494519 systemd[1]: Starting ldconfig.service... Aug 13 01:06:20.495677 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:06:20.495742 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:06:20.496921 systemd[1]: Starting systemd-boot-update.service... Aug 13 01:06:20.498847 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 01:06:20.501002 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 01:06:20.503021 systemd[1]: Starting systemd-sysext.service... Aug 13 01:06:20.504337 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) Aug 13 01:06:20.505294 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 01:06:20.507911 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
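The udev-worker audit event above consists of one SYSCALL record followed by numbered PATH records (item=0..109), each a flat run of key=value fields describing the tracefs entries the worker touched. A minimal sketch (not part of the log) of how such PATH records could be tallied by nametype and mode, assuming they are available as plain text lines like the ones above; field names follow the kernel audit format, everything else is illustration.

#!/usr/bin/env python3
"""Tally audit PATH records (as seen above) by nametype and mode."""
import re
from collections import Counter

def parse_path_record(line: str) -> dict:
    # key=value tokens; values in these records contain no embedded spaces
    return dict(re.findall(r"(\w+)=(\S+)", line))

def summarize(lines):
    by_kind = Counter()
    for line in lines:
        if "PATH item=" not in line:
            continue
        rec = parse_path_record(line)
        by_kind[(rec.get("nametype"), rec.get("mode"))] += 1
    return by_kind

if __name__ == "__main__":
    sample = [
        "audit: PATH item=1 name=(null) inode=15366 dev=00:0b mode=040750 "
        "obj=system_u:object_r:tracefs_t:s0 nametype=CREATE",
    ]
    for (nametype, mode), n in summarize(sample).items():
        print(f"{nametype} mode={mode}: {n}")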
Aug 13 01:06:20.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.517232 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 01:06:20.524017 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 01:06:20.524282 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 01:06:20.534612 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 01:06:20.549170 systemd-fsck[1119]: fsck.fat 4.2 (2021-01-31) Aug 13 01:06:20.549170 systemd-fsck[1119]: /dev/vda1: 789 files, 119324/258078 clusters Aug 13 01:06:20.551084 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 01:06:20.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.554904 systemd[1]: Mounting boot.mount... Aug 13 01:06:20.561306 systemd[1]: Mounted boot.mount. Aug 13 01:06:20.753097 systemd[1]: Finished systemd-boot-update.service. Aug 13 01:06:20.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.757340 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:06:20.758255 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 01:06:20.760124 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:06:20.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.778614 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 01:06:20.783624 (sd-sysext)[1131]: Using extensions 'kubernetes'. Aug 13 01:06:20.784009 (sd-sysext)[1131]: Merged extensions into '/usr'. Aug 13 01:06:20.799040 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:06:20.802485 systemd[1]: Mounting usr-share-oem.mount... Aug 13 01:06:20.803443 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:06:20.804864 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:06:20.807184 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:06:20.809315 systemd[1]: Starting modprobe@loop.service... Aug 13 01:06:20.810226 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:06:20.810328 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:06:20.810421 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:06:20.812954 systemd[1]: Mounted usr-share-oem.mount. Aug 13 01:06:20.814566 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:06:20.814823 systemd[1]: Finished modprobe@dm_mod.service. 
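The systemd-fsck output above reports the EFI-SYSTEM filesystem as "789 files, 119324/258078 clusters". A small sketch (not from the log) turning those two numbers into a rough usage figure; the 4 KiB cluster size is an assumption for illustration only, since fsck.fat does not print it here.

#!/usr/bin/env python3
"""Rough EFI-SYSTEM usage from the fsck.fat cluster counts above."""
used_clusters, total_clusters = 119324, 258078
assumed_cluster_bytes = 4096  # assumption; not stated in the log

usage = used_clusters / total_clusters
print(f"EFI-SYSTEM usage: {usage:.1%} "
      f"(~{used_clusters * assumed_cluster_bytes / 2**20:.0f} MiB used of "
      f"~{total_clusters * assumed_cluster_bytes / 2**20:.0f} MiB, "
      f"assuming {assumed_cluster_bytes}-byte clusters)")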
Aug 13 01:06:20.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.816191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:06:20.816312 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:06:20.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.817671 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:06:20.817871 systemd[1]: Finished modprobe@loop.service. Aug 13 01:06:20.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.819137 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:06:20.819226 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:06:20.820074 systemd[1]: Finished systemd-sysext.service. Aug 13 01:06:20.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:20.822371 systemd[1]: Starting ensure-sysext.service... Aug 13 01:06:20.824240 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 01:06:20.829234 systemd[1]: Reloading. Aug 13 01:06:20.833459 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:06:20.835750 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 01:06:20.836476 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:06:20.838289 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
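The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") come from multiple tmpfiles.d fragments declaring the same path. A simplified sketch (not the real implementation) that scans the usual tmpfiles.d directories and reports paths declared more than once; the real override and deduplication rules live in systemd-tmpfiles itself.

#!/usr/bin/env python3
"""Report tmpfiles.d paths that appear in more than one place (simplified)."""
import glob
from collections import defaultdict

TMPFILES_DIRS = ("/usr/lib/tmpfiles.d", "/run/tmpfiles.d", "/etc/tmpfiles.d")

def collect(dirs=TMPFILES_DIRS):
    seen = defaultdict(list)  # path -> [(file, lineno), ...]
    for d in dirs:
        for conf in sorted(glob.glob(f"{d}/*.conf")):
            with open(conf, encoding="utf-8", errors="replace") as fh:
                for lineno, raw in enumerate(fh, 1):
                    line = raw.strip()
                    if not line or line.startswith("#"):
                        continue
                    fields = line.split()
                    if len(fields) >= 2:
                        seen[fields[1]].append((conf, lineno))
    return seen

if __name__ == "__main__":
    for path, places in collect().items():
        if len(places) > 1:
            locs = ", ".join(f"{c}:{n}" for c, n in places)
            print(f"duplicate entry for {path}: {locs}")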
Aug 13 01:06:20.893411 /usr/lib/systemd/system-generators/torcx-generator[1167]: time="2025-08-13T01:06:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:06:20.893852 /usr/lib/systemd/system-generators/torcx-generator[1167]: time="2025-08-13T01:06:20Z" level=info msg="torcx already run" Aug 13 01:06:21.025617 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:06:21.025634 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:06:21.044863 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:06:21.095213 systemd[1]: Finished ldconfig.service. Aug 13 01:06:21.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.097621 kernel: kauditd_printk_skb: 209 callbacks suppressed Aug 13 01:06:21.097737 kernel: audit: type=1130 audit(1755047181.095:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.097427 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 01:06:21.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.102679 systemd[1]: Starting audit-rules.service... Aug 13 01:06:21.104598 kernel: audit: type=1130 audit(1755047181.100:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.105312 systemd[1]: Starting clean-ca-certificates.service... Aug 13 01:06:21.107189 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 01:06:21.109457 systemd[1]: Starting systemd-resolved.service... Aug 13 01:06:21.111502 systemd[1]: Starting systemd-timesyncd.service... Aug 13 01:06:21.113248 systemd[1]: Starting systemd-update-utmp.service... Aug 13 01:06:21.114766 systemd[1]: Finished clean-ca-certificates.service. Aug 13 01:06:21.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.118164 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:06:21.118000 audit[1225]: SYSTEM_BOOT pid=1225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Aug 13 01:06:21.122358 kernel: audit: type=1130 audit(1755047181.115:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.122407 kernel: audit: type=1127 audit(1755047181.118:135): pid=1225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.122593 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:06:21.122904 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:06:21.124712 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:06:21.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.126795 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:06:21.128812 systemd[1]: Starting modprobe@loop.service... Aug 13 01:06:21.129752 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:06:21.129941 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:06:21.130114 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:06:21.130232 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:06:21.131543 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:06:21.131763 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:06:21.133145 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:06:21.133270 systemd[1]: Finished modprobe@loop.service. Aug 13 01:06:21.140880 kernel: audit: type=1130 audit(1755047181.132:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.140942 kernel: audit: type=1131 audit(1755047181.132:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.140959 kernel: audit: type=1130 audit(1755047181.139:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:06:21.140645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:06:21.140801 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:06:21.147015 kernel: audit: type=1131 audit(1755047181.139:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.147073 kernel: audit: type=1130 audit(1755047181.146:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.147240 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:06:21.147376 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:06:21.148672 systemd[1]: Finished systemd-update-utmp.service. Aug 13 01:06:21.153608 kernel: audit: type=1131 audit(1755047181.146:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.154779 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 01:06:21.158460 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:06:21.158994 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:06:21.160577 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:06:21.163129 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:06:21.165107 systemd[1]: Starting modprobe@loop.service... Aug 13 01:06:21.166161 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:06:21.166263 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:06:21.167842 systemd[1]: Starting systemd-update-done.service... 
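The kernel lines above echo audit records by numeric type (type=1130, type=1131, type=1127, ...). A small sketch (not from the log) that maps those numbers to symbolic names and counts them; the mapping is a hand-picked subset of the values in linux/audit.h for the record types that appear in this log.

#!/usr/bin/env python3
"""Count kernel-echoed audit records like the `audit: type=NNNN ...` lines above."""
import re
from collections import Counter

AUDIT_TYPES = {
    1127: "SYSTEM_BOOT",
    1130: "SERVICE_START",
    1131: "SERVICE_STOP",
    1300: "SYSCALL",
    1302: "PATH",
    1305: "CONFIG_CHANGE",
    1307: "CWD",
    1327: "PROCTITLE",
    1400: "AVC",
}

def count_types(lines):
    counts = Counter()
    for line in lines:
        m = re.search(r"audit: type=(\d+)", line)
        if m:
            t = int(m.group(1))
            counts[AUDIT_TYPES.get(t, f"type-{t}")] += 1
    return counts

if __name__ == "__main__":
    sample = ["kernel: audit: type=1130 audit(1755047181.095:132): pid=1 ..."]
    print(dict(count_types(sample)))  # -> {'SERVICE_START': 1}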
Aug 13 01:06:21.168897 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:06:21.169008 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:06:21.170002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:06:21.170228 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:06:21.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:21.171000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 01:06:21.171000 audit[1249]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe02c88bd0 a2=420 a3=0 items=0 ppid=1216 pid=1249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:21.171000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 01:06:21.171933 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:06:21.173293 augenrules[1249]: No rules Aug 13 01:06:21.172079 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:06:21.173895 systemd[1]: Finished audit-rules.service. Aug 13 01:06:21.175134 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:06:21.175399 systemd[1]: Finished modprobe@loop.service. Aug 13 01:06:21.177553 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:06:21.177666 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:06:21.180134 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:06:21.180347 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:06:21.182233 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:06:21.184342 systemd[1]: Starting modprobe@drm.service... Aug 13 01:06:21.186320 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:06:21.188317 systemd[1]: Starting modprobe@loop.service... Aug 13 01:06:21.189556 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:06:21.189733 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:06:21.191401 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 01:06:21.192641 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:06:21.192758 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
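The PROCTITLE field in the auditctl record above is hex-encoded because the process command line contains NUL separators between argv entries. A minimal sketch (not part of the log) that decodes that exact field back into the command line recorded during audit-rules.service startup.

#!/usr/bin/env python3
"""Decode the hex PROCTITLE value from the auditctl audit record above."""
proctitle = ("2F7362696E2F617564697463746C002D52"
             "002F6574632F61756469742F61756469742E72756C6573")

argv = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> /sbin/auditctl -R /etc/audit/audit.rules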
Aug 13 01:06:21.193966 systemd[1]: Finished systemd-update-done.service. Aug 13 01:06:21.196004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:06:21.196141 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:06:21.197567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:06:21.197721 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:06:21.199215 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:06:21.199345 systemd[1]: Finished modprobe@drm.service. Aug 13 01:06:21.200755 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:06:21.201012 systemd[1]: Finished modprobe@loop.service. Aug 13 01:06:21.202508 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:06:21.202624 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:06:21.204703 systemd[1]: Finished ensure-sysext.service. Aug 13 01:06:21.205751 systemd-resolved[1221]: Positive Trust Anchors: Aug 13 01:06:21.206159 systemd-resolved[1221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:06:21.206255 systemd-resolved[1221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 01:06:21.213228 systemd-resolved[1221]: Defaulting to hostname 'linux'. Aug 13 01:06:21.214717 systemd[1]: Started systemd-resolved.service. Aug 13 01:06:21.215707 systemd[1]: Reached target network.target. Aug 13 01:06:21.216496 systemd[1]: Reached target nss-lookup.target. Aug 13 01:06:21.218316 systemd[1]: Started systemd-timesyncd.service. Aug 13 01:06:21.219490 systemd[1]: Reached target sysinit.target. Aug 13 01:06:21.219617 systemd-timesyncd[1224]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 01:06:21.219683 systemd-timesyncd[1224]: Initial clock synchronization to Wed 2025-08-13 01:06:21.613438 UTC. Aug 13 01:06:21.220391 systemd[1]: Started motdgen.path. Aug 13 01:06:21.221193 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 01:06:21.222335 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 01:06:21.223224 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:06:21.223249 systemd[1]: Reached target paths.target. Aug 13 01:06:21.224046 systemd[1]: Reached target time-set.target. Aug 13 01:06:21.224995 systemd[1]: Started logrotate.timer. Aug 13 01:06:21.225849 systemd[1]: Started mdadm.timer. Aug 13 01:06:21.226557 systemd[1]: Reached target timers.target. Aug 13 01:06:21.227648 systemd[1]: Listening on dbus.socket. Aug 13 01:06:21.229541 systemd[1]: Starting docker.socket... Aug 13 01:06:21.231299 systemd[1]: Listening on sshd.socket. Aug 13 01:06:21.232552 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:06:21.232899 systemd[1]: Listening on docker.socket. 
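systemd-resolved's negative trust anchors above mark zones (private reverse zones, home.arpa, local, ...) where DNSSEC validation is not expected. A small sketch (not from the log) that checks whether the reverse-DNS name of the DHCP address acquired earlier, 10.0.0.139, falls under one of those zones; the zone list below is copied in part from the log, the logic is illustrative only.

#!/usr/bin/env python3
"""Check an address's reverse name against resolved's negative trust anchors."""
import ipaddress

NEGATIVE_ANCHORS = [
    "10.in-addr.arpa", "168.192.in-addr.arpa", "16.172.in-addr.arpa",
    "home.arpa", "local", "internal", "lan",
]

def reverse_name(addr: str) -> str:
    # e.g. 10.0.0.139 -> 139.0.0.10.in-addr.arpa
    return ipaddress.ip_address(addr).reverse_pointer

def covered_by_negative_anchor(addr: str) -> bool:
    name = reverse_name(addr)
    return any(name == z or name.endswith("." + z) for z in NEGATIVE_ANCHORS)

if __name__ == "__main__":
    print(reverse_name("10.0.0.139"), covered_by_negative_anchor("10.0.0.139"))
    # -> 139.0.0.10.in-addr.arpa True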
Aug 13 01:06:21.234767 systemd[1]: Reached target sockets.target. Aug 13 01:06:21.235573 systemd[1]: Reached target basic.target. Aug 13 01:06:21.236477 systemd[1]: System is tainted: cgroupsv1 Aug 13 01:06:21.236518 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 01:06:21.236545 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 01:06:21.237540 systemd[1]: Starting containerd.service... Aug 13 01:06:21.239316 systemd[1]: Starting dbus.service... Aug 13 01:06:21.241122 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 01:06:21.242916 systemd[1]: Starting extend-filesystems.service... Aug 13 01:06:21.243986 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 01:06:21.244979 jq[1279]: false Aug 13 01:06:21.245009 systemd[1]: Starting motdgen.service... Aug 13 01:06:21.246825 systemd[1]: Starting prepare-helm.service... Aug 13 01:06:21.248672 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 01:06:21.250665 systemd[1]: Starting sshd-keygen.service... Aug 13 01:06:21.253228 systemd[1]: Starting systemd-logind.service... Aug 13 01:06:21.254066 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:06:21.254119 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:06:21.255088 systemd[1]: Starting update-engine.service... Aug 13 01:06:21.258118 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 01:06:21.261091 jq[1295]: true Aug 13 01:06:21.261263 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:06:21.261546 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 01:06:21.261838 dbus-daemon[1278]: [system] SELinux support is enabled Aug 13 01:06:21.262597 systemd[1]: Started dbus.service. Aug 13 01:06:21.266117 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:06:21.268055 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 01:06:21.270837 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 01:06:21.272897 tar[1298]: linux-amd64/helm Aug 13 01:06:21.270873 systemd[1]: Reached target system-config.target. Aug 13 01:06:21.271923 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:06:21.271957 systemd[1]: Reached target user-config.target. 
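The entries above show systemd starting dbus, containerd, update-engine and related units. A minimal sketch (not part of the log) of how one might verify afterwards that those units reached the active state, shelling out to `systemctl is-active`; the unit list is taken from the log, the rest is an assumption-laden illustration rather than anything the system itself ran.

#!/usr/bin/env python3
"""Check whether units started in the log above ended up active."""
import subprocess

UNITS = ["dbus.service", "containerd.service", "update-engine.service",
         "systemd-logind.service"]

for unit in UNITS:
    result = subprocess.run(["systemctl", "is-active", unit],
                            capture_output=True, text=True)
    # systemctl prints active/inactive/failed on stdout
    print(f"{unit}: {result.stdout.strip() or result.stderr.strip()}")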
Aug 13 01:06:21.274571 jq[1304]: true Aug 13 01:06:21.275391 extend-filesystems[1280]: Found loop1 Aug 13 01:06:21.279771 extend-filesystems[1280]: Found sr0 Aug 13 01:06:21.279771 extend-filesystems[1280]: Found vda Aug 13 01:06:21.279771 extend-filesystems[1280]: Found vda1 Aug 13 01:06:21.279771 extend-filesystems[1280]: Found vda2 Aug 13 01:06:21.279771 extend-filesystems[1280]: Found vda3 Aug 13 01:06:21.279771 extend-filesystems[1280]: Found usr Aug 13 01:06:21.279771 extend-filesystems[1280]: Found vda4 Aug 13 01:06:21.279771 extend-filesystems[1280]: Found vda6 Aug 13 01:06:21.279771 extend-filesystems[1280]: Found vda7 Aug 13 01:06:21.279771 extend-filesystems[1280]: Found vda9 Aug 13 01:06:21.279771 extend-filesystems[1280]: Checking size of /dev/vda9 Aug 13 01:06:21.297050 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 01:06:21.297441 systemd[1]: Finished motdgen.service. Aug 13 01:06:21.305530 update_engine[1291]: I0813 01:06:21.305366 1291 main.cc:92] Flatcar Update Engine starting Aug 13 01:06:21.308382 extend-filesystems[1280]: Resized partition /dev/vda9 Aug 13 01:06:21.312349 extend-filesystems[1330]: resize2fs 1.46.5 (30-Dec-2021) Aug 13 01:06:21.314559 update_engine[1291]: I0813 01:06:21.314500 1291 update_check_scheduler.cc:74] Next update check in 7m36s Aug 13 01:06:21.314777 systemd[1]: Started update-engine.service. Aug 13 01:06:21.317925 systemd[1]: Started locksmithd.service. Aug 13 01:06:21.319605 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 01:06:21.338025 systemd-logind[1290]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 01:06:21.338046 systemd-logind[1290]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:06:21.338812 systemd-logind[1290]: New seat seat0. Aug 13 01:06:21.340662 systemd[1]: Started systemd-logind.service. Aug 13 01:06:21.349626 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 01:06:21.417833 extend-filesystems[1330]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 01:06:21.417833 extend-filesystems[1330]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 01:06:21.417833 extend-filesystems[1330]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 01:06:21.422208 env[1307]: time="2025-08-13T01:06:21.418284250Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 01:06:21.423884 extend-filesystems[1280]: Resized filesystem in /dev/vda9 Aug 13 01:06:21.423039 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:06:21.427287 bash[1328]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:06:21.423274 systemd[1]: Finished extend-filesystems.service. Aug 13 01:06:21.425995 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 01:06:21.440931 env[1307]: time="2025-08-13T01:06:21.440887000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 01:06:21.441091 env[1307]: time="2025-08-13T01:06:21.441061337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:06:21.442232 env[1307]: time="2025-08-13T01:06:21.442161910Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:06:21.442232 env[1307]: time="2025-08-13T01:06:21.442188430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:06:21.442481 env[1307]: time="2025-08-13T01:06:21.442447957Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:06:21.442481 env[1307]: time="2025-08-13T01:06:21.442470719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 01:06:21.442545 env[1307]: time="2025-08-13T01:06:21.442482912Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 01:06:21.442545 env[1307]: time="2025-08-13T01:06:21.442492149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 01:06:21.442621 env[1307]: time="2025-08-13T01:06:21.442604440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:06:21.442865 env[1307]: time="2025-08-13T01:06:21.442838769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:06:21.443007 env[1307]: time="2025-08-13T01:06:21.442982569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:06:21.443007 env[1307]: time="2025-08-13T01:06:21.443000863Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 01:06:21.443067 env[1307]: time="2025-08-13T01:06:21.443052180Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 01:06:21.443100 env[1307]: time="2025-08-13T01:06:21.443067318Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:06:21.450001 env[1307]: time="2025-08-13T01:06:21.449973527Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 01:06:21.450001 env[1307]: time="2025-08-13T01:06:21.450001209Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 01:06:21.450077 env[1307]: time="2025-08-13T01:06:21.450014213Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 01:06:21.450077 env[1307]: time="2025-08-13T01:06:21.450056693Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 01:06:21.450077 env[1307]: time="2025-08-13T01:06:21.450069176Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 01:06:21.450135 env[1307]: time="2025-08-13T01:06:21.450081079Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Aug 13 01:06:21.450135 env[1307]: time="2025-08-13T01:06:21.450092190Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 01:06:21.450175 env[1307]: time="2025-08-13T01:06:21.450153845Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 01:06:21.450175 env[1307]: time="2025-08-13T01:06:21.450169655Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 01:06:21.450218 env[1307]: time="2025-08-13T01:06:21.450181848Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 01:06:21.450218 env[1307]: time="2025-08-13T01:06:21.450193159Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 01:06:21.450218 env[1307]: time="2025-08-13T01:06:21.450207315Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 01:06:21.450348 env[1307]: time="2025-08-13T01:06:21.450323804Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 01:06:21.450433 env[1307]: time="2025-08-13T01:06:21.450410206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 01:06:21.450874 env[1307]: time="2025-08-13T01:06:21.450849009Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 01:06:21.451006 env[1307]: time="2025-08-13T01:06:21.450897119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451006 env[1307]: time="2025-08-13T01:06:21.450910424Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 01:06:21.451006 env[1307]: time="2025-08-13T01:06:21.450997257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451072 env[1307]: time="2025-08-13T01:06:21.451011634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451094 env[1307]: time="2025-08-13T01:06:21.451085813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451115 env[1307]: time="2025-08-13T01:06:21.451100410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451115 env[1307]: time="2025-08-13T01:06:21.451112433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451159 env[1307]: time="2025-08-13T01:06:21.451123603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451159 env[1307]: time="2025-08-13T01:06:21.451133913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451159 env[1307]: time="2025-08-13T01:06:21.451143511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451159 env[1307]: time="2025-08-13T01:06:21.451155373Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Aug 13 01:06:21.451299 env[1307]: time="2025-08-13T01:06:21.451275789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451299 env[1307]: time="2025-08-13T01:06:21.451295736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451365 env[1307]: time="2025-08-13T01:06:21.451307729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451365 env[1307]: time="2025-08-13T01:06:21.451319371Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 01:06:21.451365 env[1307]: time="2025-08-13T01:06:21.451331994Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 01:06:21.451365 env[1307]: time="2025-08-13T01:06:21.451341021Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 01:06:21.451449 env[1307]: time="2025-08-13T01:06:21.451373392Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 01:06:21.451449 env[1307]: time="2025-08-13T01:06:21.451418627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 01:06:21.451685 env[1307]: time="2025-08-13T01:06:21.451626577Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 01:06:21.452548 env[1307]: time="2025-08-13T01:06:21.451687210Z" level=info msg="Connect containerd service" Aug 13 01:06:21.452548 env[1307]: time="2025-08-13T01:06:21.451720382Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 01:06:21.452548 env[1307]: time="2025-08-13T01:06:21.452356425Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:06:21.452548 env[1307]: time="2025-08-13T01:06:21.452468655Z" level=info msg="Start subscribing containerd event" Aug 13 01:06:21.452548 env[1307]: time="2025-08-13T01:06:21.452531754Z" level=info msg="Start recovering state" Aug 13 01:06:21.452757 env[1307]: time="2025-08-13T01:06:21.452733252Z" level=info msg="Start event monitor" Aug 13 01:06:21.452757 env[1307]: time="2025-08-13T01:06:21.452758579Z" level=info msg="Start snapshots syncer" Aug 13 01:06:21.452820 env[1307]: time="2025-08-13T01:06:21.452767245Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:06:21.452820 env[1307]: time="2025-08-13T01:06:21.452774098Z" level=info msg="Start streaming server" Aug 13 01:06:21.453036 locksmithd[1332]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:06:21.453258 env[1307]: time="2025-08-13T01:06:21.453160282Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:06:21.453258 env[1307]: time="2025-08-13T01:06:21.453200207Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:06:21.453373 systemd[1]: Started containerd.service. Aug 13 01:06:21.460109 env[1307]: time="2025-08-13T01:06:21.460086288Z" level=info msg="containerd successfully booted in 0.053824s" Aug 13 01:06:21.778365 tar[1298]: linux-amd64/LICENSE Aug 13 01:06:21.778497 tar[1298]: linux-amd64/README.md Aug 13 01:06:21.782822 systemd[1]: Finished prepare-helm.service. Aug 13 01:06:21.817718 systemd-networkd[1079]: eth0: Gained IPv6LL Aug 13 01:06:21.819262 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 01:06:21.820542 systemd[1]: Reached target network-online.target. Aug 13 01:06:21.822965 systemd[1]: Starting kubelet.service... Aug 13 01:06:22.369565 sshd_keygen[1312]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:06:22.390040 systemd[1]: Finished sshd-keygen.service. Aug 13 01:06:22.392649 systemd[1]: Starting issuegen.service... Aug 13 01:06:22.397752 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:06:22.397972 systemd[1]: Finished issuegen.service. Aug 13 01:06:22.400408 systemd[1]: Starting systemd-user-sessions.service... Aug 13 01:06:22.406833 systemd[1]: Finished systemd-user-sessions.service. Aug 13 01:06:22.409277 systemd[1]: Started getty@tty1.service. Aug 13 01:06:22.411286 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 01:06:22.412405 systemd[1]: Reached target getty.target. Aug 13 01:06:23.045649 systemd[1]: Started kubelet.service. Aug 13 01:06:23.047707 systemd[1]: Reached target multi-user.target. Aug 13 01:06:23.050330 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 01:06:23.057669 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 01:06:23.057898 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
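The CRI plugin above comes up with no CNI network config and logs "no network config found in /etc/cni/net.d"; containerd keeps a conf syncer running and picks a config up once a CNI add-on installs one. As a rough sketch of what it waits for, the following Go program writes a minimal bridge conflist; the file name, network name, subnet and plugin list are assumptions for illustration, not values taken from this log.

// cni_conf_sketch.go - illustrative only: writes a minimal bridge CNI config
// of the kind containerd's CRI plugin looks for under /etc/cni/net.d.
// The file name, network name and subnet are assumptions, not log values;
// a real cluster's CNI add-on installs its own config.
package main

import (
	"log"
	"os"
	"path/filepath"
)

const conf = `{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	path := filepath.Join(dir, "10-example.conflist")
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Printf("wrote %s; containerd's CNI conf syncer should pick it up", path)
}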
Aug 13 01:06:23.060727 systemd[1]: Startup finished in 6.210s (kernel) + 6.749s (userspace) = 12.960s. Aug 13 01:06:23.619866 kubelet[1380]: E0813 01:06:23.619787 1380 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:06:23.621645 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:06:23.621814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:06:24.309820 systemd[1]: Created slice system-sshd.slice. Aug 13 01:06:24.311217 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:49678.service. Aug 13 01:06:24.354424 sshd[1390]: Accepted publickey for core from 10.0.0.1 port 49678 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:06:24.355969 sshd[1390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:06:24.363763 systemd[1]: Created slice user-500.slice. Aug 13 01:06:24.364810 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 01:06:24.366370 systemd-logind[1290]: New session 1 of user core. Aug 13 01:06:24.374758 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 01:06:24.376298 systemd[1]: Starting user@500.service... Aug 13 01:06:24.379119 (systemd)[1395]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:06:24.449927 systemd[1395]: Queued start job for default target default.target. Aug 13 01:06:24.450164 systemd[1395]: Reached target paths.target. Aug 13 01:06:24.450179 systemd[1395]: Reached target sockets.target. Aug 13 01:06:24.450191 systemd[1395]: Reached target timers.target. Aug 13 01:06:24.450201 systemd[1395]: Reached target basic.target. Aug 13 01:06:24.450239 systemd[1395]: Reached target default.target. Aug 13 01:06:24.450260 systemd[1395]: Startup finished in 65ms. Aug 13 01:06:24.450369 systemd[1]: Started user@500.service. Aug 13 01:06:24.451380 systemd[1]: Started session-1.scope. Aug 13 01:06:24.503288 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:49680.service. Aug 13 01:06:24.541301 sshd[1404]: Accepted publickey for core from 10.0.0.1 port 49680 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:06:24.542471 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:06:24.546081 systemd-logind[1290]: New session 2 of user core. Aug 13 01:06:24.546875 systemd[1]: Started session-2.scope. Aug 13 01:06:24.601544 sshd[1404]: pam_unix(sshd:session): session closed for user core Aug 13 01:06:24.603964 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:49694.service. Aug 13 01:06:24.604768 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:49680.service: Deactivated successfully. Aug 13 01:06:24.605524 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:06:24.605539 systemd-logind[1290]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:06:24.606295 systemd-logind[1290]: Removed session 2. Aug 13 01:06:24.640641 sshd[1409]: Accepted publickey for core from 10.0.0.1 port 49694 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:06:24.641590 sshd[1409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:06:24.645094 systemd-logind[1290]: New session 3 of user core. 
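The kubelet exit above is the usual pre-bootstrap failure: /var/lib/kubelet/config.yaml does not exist yet, and on a kubeadm-style node that file only appears after kubeadm init or kubeadm join runs. The sketch below shows the general shape of a minimal KubeletConfiguration written to that path; every value in it is an assumption for illustration, not something recovered from this log.

// kubelet_conf_sketch.go - illustrative only: the kubelet above exits because
// /var/lib/kubelet/config.yaml is missing. On a kubeadm-provisioned node that
// file is normally written by `kubeadm init`/`kubeadm join`; this sketch just
// shows the general shape of a minimal KubeletConfiguration. All values are
// assumptions (cgroupDriver is set to cgroupfs to match the CRI plugin's
// SystemdCgroup:false shown earlier in the log).
package main

import (
	"log"
	"os"
)

const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
cgroupDriver: cgroupfs
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /var/lib/kubelet/config.yaml (illustrative values)")
}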
Aug 13 01:06:24.645828 systemd[1]: Started session-3.scope. Aug 13 01:06:24.695926 sshd[1409]: pam_unix(sshd:session): session closed for user core Aug 13 01:06:24.698684 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:49700.service. Aug 13 01:06:24.699144 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:49694.service: Deactivated successfully. Aug 13 01:06:24.700610 systemd-logind[1290]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:06:24.700613 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:06:24.701580 systemd-logind[1290]: Removed session 3. Aug 13 01:06:24.737706 sshd[1417]: Accepted publickey for core from 10.0.0.1 port 49700 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:06:24.738682 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:06:24.741999 systemd-logind[1290]: New session 4 of user core. Aug 13 01:06:24.742788 systemd[1]: Started session-4.scope. Aug 13 01:06:24.797315 sshd[1417]: pam_unix(sshd:session): session closed for user core Aug 13 01:06:24.800553 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:49702.service. Aug 13 01:06:24.801088 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:49700.service: Deactivated successfully. Aug 13 01:06:24.802588 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:06:24.802714 systemd-logind[1290]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:06:24.803555 systemd-logind[1290]: Removed session 4. Aug 13 01:06:24.837086 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 49702 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:06:24.838220 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:06:24.841649 systemd-logind[1290]: New session 5 of user core. Aug 13 01:06:24.842316 systemd[1]: Started session-5.scope. Aug 13 01:06:24.900773 sudo[1429]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:06:24.900973 sudo[1429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 01:06:24.907850 dbus-daemon[1278]: \xd0\u000d\x96\xb0YU: received setenforce notice (enforcing=-2092639568) Aug 13 01:06:24.909701 sudo[1429]: pam_unix(sudo:session): session closed for user root Aug 13 01:06:24.911143 sshd[1424]: pam_unix(sshd:session): session closed for user core Aug 13 01:06:24.914302 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:49714.service. Aug 13 01:06:24.914980 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:49702.service: Deactivated successfully. Aug 13 01:06:24.916504 systemd-logind[1290]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:06:24.916549 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:06:24.917742 systemd-logind[1290]: Removed session 5. Aug 13 01:06:24.952089 sshd[1432]: Accepted publickey for core from 10.0.0.1 port 49714 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:06:24.953063 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:06:24.956196 systemd-logind[1290]: New session 6 of user core. Aug 13 01:06:24.956913 systemd[1]: Started session-6.scope. 
Aug 13 01:06:25.009970 sudo[1438]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:06:25.010161 sudo[1438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 01:06:25.012604 sudo[1438]: pam_unix(sudo:session): session closed for user root Aug 13 01:06:25.016431 sudo[1437]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 01:06:25.016607 sudo[1437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 01:06:25.024180 systemd[1]: Stopping audit-rules.service... Aug 13 01:06:25.024000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 01:06:25.024000 audit[1441]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3e8e7610 a2=420 a3=0 items=0 ppid=1 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:25.024000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Aug 13 01:06:25.025520 auditctl[1441]: No rules Aug 13 01:06:25.025688 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:06:25.025853 systemd[1]: Stopped audit-rules.service. Aug 13 01:06:25.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:25.027139 systemd[1]: Starting audit-rules.service... Aug 13 01:06:25.041746 augenrules[1459]: No rules Aug 13 01:06:25.042238 systemd[1]: Finished audit-rules.service. Aug 13 01:06:25.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:25.043823 sudo[1437]: pam_unix(sudo:session): session closed for user root Aug 13 01:06:25.043000 audit[1437]: USER_END pid=1437 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:06:25.043000 audit[1437]: CRED_DISP pid=1437 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:06:25.045000 audit[1432]: USER_END pid=1432 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:06:25.045000 audit[1432]: CRED_DISP pid=1432 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:06:25.045098 sshd[1432]: pam_unix(sshd:session): session closed for user core Aug 13 01:06:25.047490 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:49724.service. 
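The audit records above, and the netfilter ones later in the log, carry the command line as a hex-encoded PROCTITLE field with NUL-separated arguments. A short decoder makes them readable; the two sample values below are copied from this log and decode to "/sbin/auditctl -D" and "/usr/sbin/iptables --wait -t nat -N DOCKER".

// proctitle_decode.go - small helper sketch: audit PROCTITLE fields in this
// log are hex-encoded command lines whose argv entries are separated by NUL
// bytes. Decoding them makes the audit and netfilter records readable.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv entries are separated by NUL bytes in the audit record.
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	for _, h := range []string{
		// from the auditctl record above:
		"2F7362696E2F617564697463746C002D44",
		// from one of the docker iptables records further down:
		"2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552",
	} {
		cmd, err := decodeProctitle(h)
		if err != nil {
			fmt.Println("decode error:", err)
			continue
		}
		fmt.Println(cmd)
	}
}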
Aug 13 01:06:25.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.139:22-10.0.0.1:49724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:25.048006 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:49714.service: Deactivated successfully. Aug 13 01:06:25.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.139:22-10.0.0.1:49714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:25.048747 systemd-logind[1290]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:06:25.048765 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:06:25.049607 systemd-logind[1290]: Removed session 6. Aug 13 01:06:25.083000 audit[1464]: USER_ACCT pid=1464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:06:25.084393 sshd[1464]: Accepted publickey for core from 10.0.0.1 port 49724 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:06:25.084000 audit[1464]: CRED_ACQ pid=1464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:06:25.084000 audit[1464]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc71258c30 a2=3 a3=0 items=0 ppid=1 pid=1464 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:25.084000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:06:25.085386 sshd[1464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:06:25.088475 systemd-logind[1290]: New session 7 of user core. Aug 13 01:06:25.089215 systemd[1]: Started session-7.scope. Aug 13 01:06:25.091000 audit[1464]: USER_START pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:06:25.092000 audit[1469]: CRED_ACQ pid=1469 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:06:25.142000 audit[1470]: USER_ACCT pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:06:25.143000 audit[1470]: CRED_REFR pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Aug 13 01:06:25.143554 sudo[1470]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:06:25.143800 sudo[1470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 01:06:25.144000 audit[1470]: USER_START pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:06:25.176774 systemd[1]: Starting docker.service... Aug 13 01:06:25.309601 env[1482]: time="2025-08-13T01:06:25.309508821Z" level=info msg="Starting up" Aug 13 01:06:25.310837 env[1482]: time="2025-08-13T01:06:25.310820282Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 01:06:25.310837 env[1482]: time="2025-08-13T01:06:25.310834098Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 01:06:25.310912 env[1482]: time="2025-08-13T01:06:25.310851837Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 01:06:25.310912 env[1482]: time="2025-08-13T01:06:25.310861545Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 01:06:25.312715 env[1482]: time="2025-08-13T01:06:25.312696090Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 01:06:25.312715 env[1482]: time="2025-08-13T01:06:25.312712463Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 01:06:25.312715 env[1482]: time="2025-08-13T01:06:25.312726817Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 01:06:25.312828 env[1482]: time="2025-08-13T01:06:25.312735045Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 01:06:25.949126 env[1482]: time="2025-08-13T01:06:25.949070848Z" level=warning msg="Your kernel does not support cgroup blkio weight" Aug 13 01:06:25.949126 env[1482]: time="2025-08-13T01:06:25.949098304Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Aug 13 01:06:25.949391 env[1482]: time="2025-08-13T01:06:25.949304059Z" level=info msg="Loading containers: start." 
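Before loading containers, dockerd above resolves a "unix" scheme and dials its embedded containerd over a local socket. If that connection were in doubt, a plain dial with a timeout is enough to confirm the socket is accepting connections; the default path below is the one printed in the log, and the check itself is only an illustration, not part of anything the log runs.

// socket_check.go - illustration: verify that a local unix socket such as the
// one dockerd dials above is present and accepting connections. Pass another
// path (e.g. /run/containerd/containerd.sock) as the first argument if needed.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	path := "/var/run/docker/libcontainerd/docker-containerd.sock"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket is accepting connections:", path)
}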
Aug 13 01:06:26.007000 audit[1516]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.007000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd4d3fddb0 a2=0 a3=7ffd4d3fdd9c items=0 ppid=1482 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.007000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Aug 13 01:06:26.009000 audit[1518]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.009000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd3c970950 a2=0 a3=7ffd3c97093c items=0 ppid=1482 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.009000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Aug 13 01:06:26.011000 audit[1520]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.011000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff3dee1a50 a2=0 a3=7fff3dee1a3c items=0 ppid=1482 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.011000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 01:06:26.013000 audit[1522]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.013000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffce1d0c90 a2=0 a3=7fffce1d0c7c items=0 ppid=1482 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.013000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 01:06:26.016000 audit[1524]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.016000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc7d266cf0 a2=0 a3=7ffc7d266cdc items=0 ppid=1482 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.016000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Aug 13 01:06:26.031000 audit[1529]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1529 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Aug 13 01:06:26.031000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd489a6b00 a2=0 a3=7ffd489a6aec items=0 ppid=1482 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.031000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Aug 13 01:06:26.158000 audit[1531]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.160175 kernel: kauditd_printk_skb: 46 callbacks suppressed Aug 13 01:06:26.160274 kernel: audit: type=1325 audit(1755047186.158:170): table=filter:8 family=2 entries=1 op=nft_register_chain pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.158000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe415f46f0 a2=0 a3=7ffe415f46dc items=0 ppid=1482 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.167130 kernel: audit: type=1300 audit(1755047186.158:170): arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe415f46f0 a2=0 a3=7ffe415f46dc items=0 ppid=1482 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.167202 kernel: audit: type=1327 audit(1755047186.158:170): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Aug 13 01:06:26.158000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Aug 13 01:06:26.160000 audit[1533]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.171415 kernel: audit: type=1325 audit(1755047186.160:171): table=filter:9 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.171470 kernel: audit: type=1300 audit(1755047186.160:171): arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffce874cec0 a2=0 a3=7ffce874ceac items=0 ppid=1482 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.160000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffce874cec0 a2=0 a3=7ffce874ceac items=0 ppid=1482 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.175966 kernel: audit: type=1327 audit(1755047186.160:171): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Aug 13 01:06:26.160000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Aug 13 01:06:26.178103 kernel: audit: type=1325 audit(1755047186.162:172): 
table=filter:10 family=2 entries=2 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.162000 audit[1535]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.162000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd357642c0 a2=0 a3=7ffd357642ac items=0 ppid=1482 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.185045 kernel: audit: type=1300 audit(1755047186.162:172): arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd357642c0 a2=0 a3=7ffd357642ac items=0 ppid=1482 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.185093 kernel: audit: type=1327 audit(1755047186.162:172): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 01:06:26.162000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 01:06:26.266000 audit[1539]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.266000 audit[1539]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe44a3d550 a2=0 a3=7ffe44a3d53c items=0 ppid=1482 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.266000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 01:06:26.270640 kernel: audit: type=1325 audit(1755047186.266:173): table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.282000 audit[1540]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.282000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc210c86f0 a2=0 a3=7ffc210c86dc items=0 ppid=1482 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.282000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 01:06:26.295628 kernel: Initializing XFRM netlink socket Aug 13 01:06:26.323243 env[1482]: time="2025-08-13T01:06:26.323196073Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Aug 13 01:06:26.340000 audit[1548]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.340000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe2cea79a0 a2=0 a3=7ffe2cea798c items=0 ppid=1482 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.340000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Aug 13 01:06:26.352000 audit[1551]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.352000 audit[1551]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffec1c40d0 a2=0 a3=7fffec1c40bc items=0 ppid=1482 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.352000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Aug 13 01:06:26.354000 audit[1554]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.354000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffcf7af760 a2=0 a3=7fffcf7af74c items=0 ppid=1482 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.354000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Aug 13 01:06:26.356000 audit[1556]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.356000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc84811f50 a2=0 a3=7ffc84811f3c items=0 ppid=1482 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.356000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Aug 13 01:06:26.357000 audit[1558]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.357000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffce327b110 a2=0 a3=7ffce327b0fc items=0 ppid=1482 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.357000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Aug 13 01:06:26.358000 audit[1560]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.358000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe49eafe30 a2=0 a3=7ffe49eafe1c items=0 ppid=1482 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.358000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Aug 13 01:06:26.360000 audit[1562]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.360000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffde09b8820 a2=0 a3=7ffde09b880c items=0 ppid=1482 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.360000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Aug 13 01:06:26.367000 audit[1565]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.367000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fffad4c5980 a2=0 a3=7fffad4c596c items=0 ppid=1482 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.367000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Aug 13 01:06:26.369000 audit[1567]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.369000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe87cefae0 a2=0 a3=7ffe87cefacc items=0 ppid=1482 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.369000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 01:06:26.372000 audit[1569]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.372000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff02ad1420 a2=0 a3=7fff02ad140c items=0 ppid=1482 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.372000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 01:06:26.374000 audit[1571]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.374000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd98da1ca0 a2=0 a3=7ffd98da1c8c items=0 ppid=1482 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.374000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Aug 13 01:06:26.375211 systemd-networkd[1079]: docker0: Link UP Aug 13 01:06:26.522000 audit[1575]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.522000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd6e252e40 a2=0 a3=7ffd6e252e2c items=0 ppid=1482 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.522000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 01:06:26.526000 audit[1576]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:26.526000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc048f1170 a2=0 a3=7ffc048f115c items=0 ppid=1482 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:26.526000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 01:06:26.527906 env[1482]: time="2025-08-13T01:06:26.527862392Z" level=info msg="Loading containers: done." Aug 13 01:06:26.578667 env[1482]: time="2025-08-13T01:06:26.578567840Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:06:26.578901 env[1482]: time="2025-08-13T01:06:26.578856041Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 01:06:26.579024 env[1482]: time="2025-08-13T01:06:26.578994538Z" level=info msg="Daemon has completed initialization" Aug 13 01:06:26.605751 systemd[1]: Started docker.service. Aug 13 01:06:26.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:06:26.612582 env[1482]: time="2025-08-13T01:06:26.612490405Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:06:27.518963 env[1307]: time="2025-08-13T01:06:27.518894414Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 01:06:28.298125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3511894576.mount: Deactivated successfully. Aug 13 01:06:31.663844 env[1307]: time="2025-08-13T01:06:31.663729543Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:31.666920 env[1307]: time="2025-08-13T01:06:31.666872655Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:31.668891 env[1307]: time="2025-08-13T01:06:31.668840937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:31.671137 env[1307]: time="2025-08-13T01:06:31.671110317Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:31.671907 env[1307]: time="2025-08-13T01:06:31.671873349Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 01:06:31.673011 env[1307]: time="2025-08-13T01:06:31.672989373Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 01:06:33.854857 env[1307]: time="2025-08-13T01:06:33.854770400Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:33.858563 env[1307]: time="2025-08-13T01:06:33.858340777Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:33.860576 env[1307]: time="2025-08-13T01:06:33.860528745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:33.862835 env[1307]: time="2025-08-13T01:06:33.862810935Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:33.863688 env[1307]: time="2025-08-13T01:06:33.863650340Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 01:06:33.864465 env[1307]: time="2025-08-13T01:06:33.864414855Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 01:06:33.872949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
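With the daemon reporting "API listen on /run/docker.sock", the Engine API can be queried directly over that socket with an ordinary HTTP client; /version is a standard endpoint and should report the 20.10.23 build shown above. This is a sketch for poking the daemon, not something the log itself runs.

// docker_version_check.go - illustration: query the Docker Engine API over the
// /run/docker.sock unix socket reported above. The host in the URL is ignored;
// the transport always dials the local socket.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	tr := &http.Transport{
		// Ignore the URL's host and always dial the local socket.
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "unix", "/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr}

	resp, err := client.Get("http://unix/version")
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}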
Aug 13 01:06:33.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:33.873196 systemd[1]: Stopped kubelet.service. Aug 13 01:06:33.874166 kernel: kauditd_printk_skb: 45 callbacks suppressed Aug 13 01:06:33.874227 kernel: audit: type=1130 audit(1755047193.871:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:33.875248 systemd[1]: Starting kubelet.service... Aug 13 01:06:33.880676 kernel: audit: type=1131 audit(1755047193.871:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:33.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:34.013481 systemd[1]: Started kubelet.service. Aug 13 01:06:34.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:34.017639 kernel: audit: type=1130 audit(1755047194.013:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:35.006037 kubelet[1622]: E0813 01:06:35.005974 1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:06:35.009006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:06:35.009216 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:06:35.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 01:06:35.013613 kernel: audit: type=1131 audit(1755047195.009:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Aug 13 01:06:39.610749 env[1307]: time="2025-08-13T01:06:39.610678474Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:39.616868 env[1307]: time="2025-08-13T01:06:39.616812856Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:39.637918 env[1307]: time="2025-08-13T01:06:39.637877264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:39.648052 env[1307]: time="2025-08-13T01:06:39.647984615Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:39.648739 env[1307]: time="2025-08-13T01:06:39.648713242Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 01:06:39.649306 env[1307]: time="2025-08-13T01:06:39.649280884Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 01:06:41.322352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1577929895.mount: Deactivated successfully. Aug 13 01:06:42.550897 env[1307]: time="2025-08-13T01:06:42.550803663Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:42.616417 env[1307]: time="2025-08-13T01:06:42.616338886Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:42.654764 env[1307]: time="2025-08-13T01:06:42.654687713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:42.673399 env[1307]: time="2025-08-13T01:06:42.673345063Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:42.673849 env[1307]: time="2025-08-13T01:06:42.673825795Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 01:06:42.674448 env[1307]: time="2025-08-13T01:06:42.674429804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:06:45.192042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 01:06:45.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:45.192202 systemd[1]: Stopped kubelet.service. 
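The kubelet keeps failing with the same missing-config error while systemd reschedules it and bumps the restart counter. When triaging a boot like this one from a saved journal, a small scanner that counts the config-load failures and tracks the last restart counter is handy; the default file name below is an assumption, so pass your own path.

// restart_scan.go - sketch: count the kubelet's "failed to load kubelet config
// file" occurrences and report the last "restart counter is at N" value from a
// saved copy of this log.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

func main() {
	path := "boot.log"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	counter := regexp.MustCompile(`kubelet\.service: Scheduled restart job, restart counter is at (\d+)`)
	failure := regexp.MustCompile(`failed to load kubelet config file`)

	failures := 0
	last := ""
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if failure.MatchString(line) {
			failures++
		}
		if m := counter.FindStringSubmatch(line); m != nil {
			last = m[1]
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("kubelet config-load failures: %d, last restart counter: %s\n", failures, last)
}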
Aug 13 01:06:45.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:45.196259 systemd[1]: Starting kubelet.service... Aug 13 01:06:45.200466 kernel: audit: type=1130 audit(1755047205.191:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:45.200539 kernel: audit: type=1131 audit(1755047205.191:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:45.200781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664302966.mount: Deactivated successfully. Aug 13 01:06:45.288795 systemd[1]: Started kubelet.service. Aug 13 01:06:45.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:45.293617 kernel: audit: type=1130 audit(1755047205.287:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:45.393184 kubelet[1638]: E0813 01:06:45.393100 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:06:45.395493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:06:45.395704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:06:45.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 01:06:45.399609 kernel: audit: type=1131 audit(1755047205.394:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Aug 13 01:06:48.952173 env[1307]: time="2025-08-13T01:06:48.952093646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:48.954090 env[1307]: time="2025-08-13T01:06:48.954038487Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:48.956106 env[1307]: time="2025-08-13T01:06:48.956054862Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:48.957949 env[1307]: time="2025-08-13T01:06:48.957910329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:48.958699 env[1307]: time="2025-08-13T01:06:48.958664653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:06:48.959275 env[1307]: time="2025-08-13T01:06:48.959232463Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:06:49.704767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856106946.mount: Deactivated successfully. Aug 13 01:06:49.782127 env[1307]: time="2025-08-13T01:06:49.782042148Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:49.784031 env[1307]: time="2025-08-13T01:06:49.784007937Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:49.785808 env[1307]: time="2025-08-13T01:06:49.785774164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:49.787295 env[1307]: time="2025-08-13T01:06:49.787261047Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:49.787766 env[1307]: time="2025-08-13T01:06:49.787735017Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:06:49.788277 env[1307]: time="2025-08-13T01:06:49.788251565Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:06:50.393847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637593908.mount: Deactivated successfully. 
Aug 13 01:06:53.539456 env[1307]: time="2025-08-13T01:06:53.539382869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:53.541458 env[1307]: time="2025-08-13T01:06:53.541390620Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:53.543389 env[1307]: time="2025-08-13T01:06:53.543366814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:53.545314 env[1307]: time="2025-08-13T01:06:53.545277847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:53.546158 env[1307]: time="2025-08-13T01:06:53.546115510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:06:55.646831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 01:06:55.647084 systemd[1]: Stopped kubelet.service. Aug 13 01:06:55.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:55.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:55.648970 systemd[1]: Starting kubelet.service... Aug 13 01:06:55.656177 kernel: audit: type=1130 audit(1755047215.645:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:55.656293 kernel: audit: type=1131 audit(1755047215.645:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:55.815902 systemd[1]: Started kubelet.service. Aug 13 01:06:55.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:55.827520 kernel: audit: type=1130 audit(1755047215.814:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:06:56.229153 kubelet[1676]: E0813 01:06:56.229068 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:06:56.233266 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:06:56.233484 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:06:56.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 01:06:56.237632 kernel: audit: type=1131 audit(1755047216.232:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 01:06:56.694135 systemd[1]: Stopped kubelet.service. Aug 13 01:06:56.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:56.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:56.699600 systemd[1]: Starting kubelet.service... Aug 13 01:06:56.704294 kernel: audit: type=1130 audit(1755047216.695:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:56.704397 kernel: audit: type=1131 audit(1755047216.695:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:56.773961 systemd[1]: Reloading. Aug 13 01:06:56.877099 /usr/lib/systemd/system-generators/torcx-generator[1712]: time="2025-08-13T01:06:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:06:56.877140 /usr/lib/systemd/system-generators/torcx-generator[1712]: time="2025-08-13T01:06:56Z" level=info msg="torcx already run" Aug 13 01:06:57.769300 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:06:57.769323 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:06:57.791735 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:06:57.878685 systemd[1]: Started kubelet.service. 
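
The kubelet crash loop above keeps failing with "open /var/lib/kubelet/config.yaml: no such file or directory"; the unit only stabilizes once that file exists. Purely as a sketch of the file's shape, the snippet below builds a minimal KubeletConfiguration and prints it as YAML. The two field values are taken from later lines in this same log ("Adding static pod path" path="/etc/kubernetes/manifests" and CgroupDriver "cgroupfs" in the node config dump); everything else about the real file is host-specific and, on a kubeadm bootstrap, is written by kubeadm rather than by hand.

```go
// Illustrative sketch only: the shape of the KubeletConfiguration that normally
// lives at /var/lib/kubelet/config.yaml (the file the crash-looping kubelet above
// cannot find). Field values mirror entries seen later in this log; the real file
// is host-specific and typically written by kubeadm.
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Values matching later lines in this log: the static pod path from
		// "Adding static pod path", and the cgroup driver from the node config dump.
		StaticPodPath: "/etc/kubernetes/manifests",
		CgroupDriver:  "cgroupfs",
	}

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```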
Aug 13 01:06:57.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:57.883613 kernel: audit: type=1130 audit(1755047217.877:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:57.884169 systemd[1]: Stopping kubelet.service... Aug 13 01:06:57.884597 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:06:57.884893 systemd[1]: Stopped kubelet.service. Aug 13 01:06:57.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:57.887521 systemd[1]: Starting kubelet.service... Aug 13 01:06:57.888611 kernel: audit: type=1131 audit(1755047217.883:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:57.985319 systemd[1]: Started kubelet.service. Aug 13 01:06:57.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:57.989650 kernel: audit: type=1130 audit(1755047217.985:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:06:58.036334 kubelet[1774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:06:58.036334 kubelet[1774]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:06:58.036334 kubelet[1774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 01:06:58.036334 kubelet[1774]: I0813 01:06:58.036293 1774 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:06:58.243891 kubelet[1774]: I0813 01:06:58.243841 1774 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:06:58.243891 kubelet[1774]: I0813 01:06:58.243876 1774 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:06:58.244443 kubelet[1774]: I0813 01:06:58.244420 1774 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:06:58.272153 kubelet[1774]: E0813 01:06:58.272104 1774 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:06:58.277893 kubelet[1774]: I0813 01:06:58.277817 1774 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:06:58.285160 kubelet[1774]: E0813 01:06:58.285112 1774 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:06:58.285160 kubelet[1774]: I0813 01:06:58.285160 1774 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:06:58.291111 kubelet[1774]: I0813 01:06:58.291025 1774 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:06:58.292094 kubelet[1774]: I0813 01:06:58.292067 1774 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:06:58.292238 kubelet[1774]: I0813 01:06:58.292202 1774 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:06:58.292408 kubelet[1774]: I0813 01:06:58.292237 1774 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 01:06:58.292513 kubelet[1774]: I0813 01:06:58.292423 1774 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:06:58.292513 kubelet[1774]: I0813 01:06:58.292432 1774 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:06:58.292569 kubelet[1774]: I0813 01:06:58.292556 1774 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:06:58.294952 kubelet[1774]: I0813 01:06:58.294893 1774 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:06:58.294952 kubelet[1774]: I0813 01:06:58.294916 1774 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:06:58.295052 kubelet[1774]: I0813 01:06:58.294958 1774 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:06:58.295052 kubelet[1774]: I0813 01:06:58.294981 1774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:06:58.305517 kubelet[1774]: W0813 01:06:58.305447 1774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Aug 13 01:06:58.305517 kubelet[1774]: E0813 01:06:58.305518 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:06:58.306904 kubelet[1774]: I0813 01:06:58.306863 1774 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 01:06:58.309425 kubelet[1774]: I0813 01:06:58.309388 1774 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:06:58.309940 kubelet[1774]: W0813 01:06:58.309882 1774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Aug 13 01:06:58.310089 kubelet[1774]: E0813 01:06:58.310062 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:06:58.310388 kubelet[1774]: W0813 01:06:58.310336 1774 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:06:58.314047 kubelet[1774]: I0813 01:06:58.314006 1774 server.go:1274] "Started kubelet" Aug 13 01:06:58.314173 kubelet[1774]: I0813 01:06:58.314102 1774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:06:58.314456 kubelet[1774]: I0813 01:06:58.314434 1774 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:06:58.315220 kubelet[1774]: I0813 01:06:58.315185 1774 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:06:58.321000 audit[1774]: AVC avc: denied { mac_admin } for pid=1774 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:06:58.322119 kubelet[1774]: I0813 01:06:58.322021 1774 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:06:58.322235 kubelet[1774]: I0813 01:06:58.322207 1774 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 01:06:58.322355 kubelet[1774]: I0813 01:06:58.322332 1774 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 01:06:58.322535 kubelet[1774]: I0813 01:06:58.322518 1774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:06:58.321000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:06:58.321000 audit[1774]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006b6c90 a1=c0009b6ed0 a2=c0006b6c60 a3=25 items=0 ppid=1 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.321000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:06:58.321000 audit[1774]: AVC avc: denied { mac_admin } for pid=1774 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:06:58.321000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:06:58.321000 audit[1774]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006aad80 a1=c0009b6ee8 a2=c0006b6d20 a3=25 items=0 ppid=1 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:06:58.324000 audit[1787]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:58.325649 kernel: audit: type=1400 audit(1755047218.321:206): avc: denied { mac_admin } for pid=1774 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:06:58.324000 audit[1787]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdfb9b7710 a2=0 a3=7ffdfb9b76fc items=0 ppid=1774 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 01:06:58.325903 kubelet[1774]: I0813 01:06:58.325873 1774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:06:58.326508 kubelet[1774]: I0813 01:06:58.326212 1774 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:06:58.326508 kubelet[1774]: E0813 01:06:58.326444 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:06:58.326000 audit[1788]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:58.326000 audit[1788]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9c08bd90 a2=0 a3=7ffd9c08bd7c items=0 ppid=1774 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 01:06:58.326993 kubelet[1774]: I0813 01:06:58.326844 1774 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:06:58.326993 kubelet[1774]: 
I0813 01:06:58.326931 1774 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:06:58.332096 kubelet[1774]: W0813 01:06:58.332050 1774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Aug 13 01:06:58.332150 kubelet[1774]: E0813 01:06:58.332102 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:06:58.332178 kubelet[1774]: E0813 01:06:58.332138 1774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms" Aug 13 01:06:58.332975 kubelet[1774]: E0813 01:06:58.331299 1774 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2e257ba43c48 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 01:06:58.313968712 +0000 UTC m=+0.324052449,LastTimestamp:2025-08-13 01:06:58.313968712 +0000 UTC m=+0.324052449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 01:06:58.333129 kubelet[1774]: E0813 01:06:58.333097 1774 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:06:58.333182 kubelet[1774]: I0813 01:06:58.333127 1774 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:06:58.333182 kubelet[1774]: I0813 01:06:58.333179 1774 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:06:58.333300 kubelet[1774]: I0813 01:06:58.333273 1774 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:06:58.332000 audit[1790]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1790 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:58.332000 audit[1790]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcd5941510 a2=0 a3=7ffcd59414fc items=0 ppid=1774 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 01:06:58.334000 audit[1792]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:58.334000 audit[1792]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc5b40afb0 a2=0 a3=7ffc5b40af9c items=0 ppid=1774 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 01:06:58.341000 audit[1795]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:58.341000 audit[1795]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffde7bb5320 a2=0 a3=7ffde7bb530c items=0 ppid=1774 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Aug 13 01:06:58.342406 kubelet[1774]: I0813 01:06:58.342360 1774 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Aug 13 01:06:58.342000 audit[1796]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:06:58.342000 audit[1796]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffcc0cab50 a2=0 a3=7fffcc0cab3c items=0 ppid=1774 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.342000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 01:06:58.343406 kubelet[1774]: I0813 01:06:58.343274 1774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:06:58.343406 kubelet[1774]: I0813 01:06:58.343305 1774 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:06:58.343406 kubelet[1774]: I0813 01:06:58.343336 1774 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:06:58.343406 kubelet[1774]: E0813 01:06:58.343378 1774 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:06:58.344000 audit[1798]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:58.344000 audit[1798]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd55d3e080 a2=0 a3=7ffd55d3e06c items=0 ppid=1774 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 01:06:58.345000 audit[1799]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:58.345000 audit[1799]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3b7c61f0 a2=0 a3=7fff3b7c61dc items=0 ppid=1774 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 01:06:58.346000 audit[1800]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:06:58.346000 audit[1800]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc39379a10 a2=0 a3=7ffc393799fc items=0 ppid=1774 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.346000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 01:06:58.347000 audit[1801]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:06:58.347000 
audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd25dedca0 a2=0 a3=7ffd25dedc8c items=0 ppid=1774 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 01:06:58.349009 kubelet[1774]: W0813 01:06:58.348949 1774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Aug 13 01:06:58.349064 kubelet[1774]: E0813 01:06:58.349017 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:06:58.348000 audit[1803]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:06:58.348000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffda79cd2f0 a2=0 a3=7ffda79cd2dc items=0 ppid=1774 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 01:06:58.349000 audit[1805]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1805 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:06:58.349000 audit[1805]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff15b39d60 a2=0 a3=7fff15b39d4c items=0 ppid=1774 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 01:06:58.353111 kubelet[1774]: I0813 01:06:58.353084 1774 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:06:58.353111 kubelet[1774]: I0813 01:06:58.353101 1774 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:06:58.353111 kubelet[1774]: I0813 01:06:58.353116 1774 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:06:58.427458 kubelet[1774]: E0813 01:06:58.427410 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:06:58.443891 kubelet[1774]: E0813 01:06:58.443822 1774 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 01:06:58.527946 kubelet[1774]: E0813 01:06:58.527907 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:06:58.533389 kubelet[1774]: E0813 01:06:58.533347 1774 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms" Aug 13 01:06:58.628951 kubelet[1774]: E0813 01:06:58.628771 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:06:58.645014 kubelet[1774]: E0813 01:06:58.644932 1774 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 01:06:58.729471 kubelet[1774]: E0813 01:06:58.729408 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:06:58.754365 kubelet[1774]: I0813 01:06:58.754300 1774 policy_none.go:49] "None policy: Start" Aug 13 01:06:58.755368 kubelet[1774]: I0813 01:06:58.755322 1774 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:06:58.755368 kubelet[1774]: I0813 01:06:58.755360 1774 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:06:58.764305 kubelet[1774]: I0813 01:06:58.764257 1774 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:06:58.762000 audit[1774]: AVC avc: denied { mac_admin } for pid=1774 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:06:58.762000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:06:58.762000 audit[1774]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00106fc80 a1=c001060f00 a2=c00106fc50 a3=25 items=0 ppid=1 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:06:58.762000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:06:58.764571 kubelet[1774]: I0813 01:06:58.764374 1774 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 01:06:58.764571 kubelet[1774]: I0813 01:06:58.764535 1774 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:06:58.764641 kubelet[1774]: I0813 01:06:58.764559 1774 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:06:58.764921 kubelet[1774]: I0813 01:06:58.764893 1774 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:06:58.766012 kubelet[1774]: E0813 01:06:58.765989 1774 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 01:06:58.866726 kubelet[1774]: I0813 01:06:58.866666 1774 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:06:58.867240 kubelet[1774]: E0813 01:06:58.867174 1774 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Aug 13 01:06:58.934621 kubelet[1774]: E0813 01:06:58.934408 1774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Aug 13 01:06:59.068574 kubelet[1774]: I0813 01:06:59.068504 1774 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:06:59.069014 kubelet[1774]: E0813 01:06:59.068988 1774 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Aug 13 01:06:59.132386 kubelet[1774]: I0813 01:06:59.132338 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0acdfd6fde62b5b7899736c855c08955-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0acdfd6fde62b5b7899736c855c08955\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:06:59.132711 kubelet[1774]: I0813 01:06:59.132385 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0acdfd6fde62b5b7899736c855c08955-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0acdfd6fde62b5b7899736c855c08955\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:06:59.132768 kubelet[1774]: I0813 01:06:59.132716 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:06:59.132826 kubelet[1774]: I0813 01:06:59.132763 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:06:59.132826 kubelet[1774]: I0813 01:06:59.132793 1774 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 01:06:59.132826 kubelet[1774]: I0813 01:06:59.132815 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0acdfd6fde62b5b7899736c855c08955-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0acdfd6fde62b5b7899736c855c08955\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:06:59.132924 kubelet[1774]: I0813 01:06:59.132833 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:06:59.132924 kubelet[1774]: I0813 01:06:59.132880 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:06:59.132992 kubelet[1774]: I0813 01:06:59.132926 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:06:59.202207 kubelet[1774]: W0813 01:06:59.202043 1774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Aug 13 01:06:59.202207 kubelet[1774]: E0813 01:06:59.202143 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:06:59.351400 kubelet[1774]: E0813 01:06:59.351338 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:06:59.351673 kubelet[1774]: E0813 01:06:59.351612 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:06:59.351974 env[1307]: time="2025-08-13T01:06:59.351920904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 01:06:59.352527 env[1307]: time="2025-08-13T01:06:59.352398283Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0acdfd6fde62b5b7899736c855c08955,Namespace:kube-system,Attempt:0,}" Aug 13 01:06:59.356824 kubelet[1774]: E0813 01:06:59.356759 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:06:59.357319 env[1307]: time="2025-08-13T01:06:59.357286245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 01:06:59.445667 kubelet[1774]: W0813 01:06:59.445551 1774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Aug 13 01:06:59.445667 kubelet[1774]: E0813 01:06:59.445662 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:06:59.471106 kubelet[1774]: I0813 01:06:59.471012 1774 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:06:59.471557 kubelet[1774]: E0813 01:06:59.471513 1774 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Aug 13 01:06:59.735261 kubelet[1774]: E0813 01:06:59.735143 1774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="1.6s" Aug 13 01:06:59.762623 kubelet[1774]: W0813 01:06:59.762555 1774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Aug 13 01:06:59.762702 kubelet[1774]: E0813 01:06:59.762642 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:06:59.818026 kubelet[1774]: W0813 01:06:59.817953 1774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Aug 13 01:06:59.818026 kubelet[1774]: E0813 01:06:59.818026 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:06:59.870755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170623206.mount: Deactivated 
successfully. Aug 13 01:06:59.875161 env[1307]: time="2025-08-13T01:06:59.875126883Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:59.877776 env[1307]: time="2025-08-13T01:06:59.877748076Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:59.878749 env[1307]: time="2025-08-13T01:06:59.878718930Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:59.880461 env[1307]: time="2025-08-13T01:06:59.880439680Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:59.882128 env[1307]: time="2025-08-13T01:06:59.882078788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:59.883187 env[1307]: time="2025-08-13T01:06:59.883155221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:59.884513 env[1307]: time="2025-08-13T01:06:59.884479224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:59.885846 env[1307]: time="2025-08-13T01:06:59.885813063Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:59.887844 env[1307]: time="2025-08-13T01:06:59.887811878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:59.889015 env[1307]: time="2025-08-13T01:06:59.888989790Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:59.890293 env[1307]: time="2025-08-13T01:06:59.890273191Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:06:59.891024 env[1307]: time="2025-08-13T01:06:59.890994230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:00.160878 env[1307]: time="2025-08-13T01:07:00.160112916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:00.160878 env[1307]: time="2025-08-13T01:07:00.160167815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:00.160878 env[1307]: time="2025-08-13T01:07:00.160178513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:00.160878 env[1307]: time="2025-08-13T01:07:00.160354989Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d81fe15a62d0e1e78d18df8ccaefdf0195890e6570f266c3d67a200959762c82 pid=1815 runtime=io.containerd.runc.v2 Aug 13 01:07:00.164404 env[1307]: time="2025-08-13T01:07:00.164224286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:00.164404 env[1307]: time="2025-08-13T01:07:00.164263301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:00.164404 env[1307]: time="2025-08-13T01:07:00.164275916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:00.164547 env[1307]: time="2025-08-13T01:07:00.164425509Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4aee8b387b18f467715399e3ae63bede9cba4985c491a36b7b4e714987243d4 pid=1837 runtime=io.containerd.runc.v2 Aug 13 01:07:00.166165 env[1307]: time="2025-08-13T01:07:00.166111116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:00.166248 env[1307]: time="2025-08-13T01:07:00.166144115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:00.166248 env[1307]: time="2025-08-13T01:07:00.166179190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:00.166477 env[1307]: time="2025-08-13T01:07:00.166386840Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ab1c0b77d289f61f01de2a94a406b23b3b903b36c769a624e878a525e45510b pid=1838 runtime=io.containerd.runc.v2 Aug 13 01:07:00.529910 kubelet[1774]: E0813 01:07:00.529720 1774 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:07:00.531671 kubelet[1774]: I0813 01:07:00.531332 1774 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:07:00.531671 kubelet[1774]: E0813 01:07:00.531617 1774 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Aug 13 01:07:00.800875 env[1307]: time="2025-08-13T01:07:00.800488922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ab1c0b77d289f61f01de2a94a406b23b3b903b36c769a624e878a525e45510b\"" Aug 13 01:07:00.802881 kubelet[1774]: E0813 01:07:00.802858 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:00.805293 env[1307]: time="2025-08-13T01:07:00.805257174Z" level=info msg="CreateContainer within sandbox \"4ab1c0b77d289f61f01de2a94a406b23b3b903b36c769a624e878a525e45510b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:07:00.809152 env[1307]: time="2025-08-13T01:07:00.809095268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0acdfd6fde62b5b7899736c855c08955,Namespace:kube-system,Attempt:0,} returns sandbox id \"d81fe15a62d0e1e78d18df8ccaefdf0195890e6570f266c3d67a200959762c82\"" Aug 13 01:07:00.809958 kubelet[1774]: E0813 01:07:00.809936 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:00.811549 env[1307]: time="2025-08-13T01:07:00.811521112Z" level=info msg="CreateContainer within sandbox \"d81fe15a62d0e1e78d18df8ccaefdf0195890e6570f266c3d67a200959762c82\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:07:00.817343 env[1307]: time="2025-08-13T01:07:00.817313044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4aee8b387b18f467715399e3ae63bede9cba4985c491a36b7b4e714987243d4\"" Aug 13 01:07:00.817903 kubelet[1774]: E0813 01:07:00.817877 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:00.819479 env[1307]: time="2025-08-13T01:07:00.819446670Z" level=info msg="CreateContainer within sandbox 
\"b4aee8b387b18f467715399e3ae63bede9cba4985c491a36b7b4e714987243d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:07:01.276882 kubelet[1774]: W0813 01:07:01.276711 1774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Aug 13 01:07:01.276882 kubelet[1774]: E0813 01:07:01.276774 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:07:01.333969 env[1307]: time="2025-08-13T01:07:01.333864100Z" level=info msg="CreateContainer within sandbox \"4ab1c0b77d289f61f01de2a94a406b23b3b903b36c769a624e878a525e45510b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d837a96b56bd768332f9a48d44222bda1a6fa125812b2b019f36798691839af\"" Aug 13 01:07:01.334562 env[1307]: time="2025-08-13T01:07:01.334521504Z" level=info msg="StartContainer for \"4d837a96b56bd768332f9a48d44222bda1a6fa125812b2b019f36798691839af\"" Aug 13 01:07:01.336608 kubelet[1774]: E0813 01:07:01.336514 1774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="3.2s" Aug 13 01:07:01.336802 env[1307]: time="2025-08-13T01:07:01.336558831Z" level=info msg="CreateContainer within sandbox \"b4aee8b387b18f467715399e3ae63bede9cba4985c491a36b7b4e714987243d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d63d445976bc37e18bbd4271e6a67203d07a5d5f8eed4290ed4060bcb2ffd66c\"" Aug 13 01:07:01.337070 env[1307]: time="2025-08-13T01:07:01.337023997Z" level=info msg="StartContainer for \"d63d445976bc37e18bbd4271e6a67203d07a5d5f8eed4290ed4060bcb2ffd66c\"" Aug 13 01:07:01.338519 env[1307]: time="2025-08-13T01:07:01.338473794Z" level=info msg="CreateContainer within sandbox \"d81fe15a62d0e1e78d18df8ccaefdf0195890e6570f266c3d67a200959762c82\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"610067a5724bf74aaa9845f2f523df339dcec1a85cfe9b396f91588afe9b9b9c\"" Aug 13 01:07:01.339062 env[1307]: time="2025-08-13T01:07:01.339023663Z" level=info msg="StartContainer for \"610067a5724bf74aaa9845f2f523df339dcec1a85cfe9b396f91588afe9b9b9c\"" Aug 13 01:07:01.403533 env[1307]: time="2025-08-13T01:07:01.403489261Z" level=info msg="StartContainer for \"4d837a96b56bd768332f9a48d44222bda1a6fa125812b2b019f36798691839af\" returns successfully" Aug 13 01:07:01.421351 env[1307]: time="2025-08-13T01:07:01.421303061Z" level=info msg="StartContainer for \"610067a5724bf74aaa9845f2f523df339dcec1a85cfe9b396f91588afe9b9b9c\" returns successfully" Aug 13 01:07:01.421715 env[1307]: time="2025-08-13T01:07:01.421603087Z" level=info msg="StartContainer for \"d63d445976bc37e18bbd4271e6a67203d07a5d5f8eed4290ed4060bcb2ffd66c\" returns successfully" Aug 13 01:07:01.537549 kubelet[1774]: E0813 01:07:01.537410 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:01.538868 
kubelet[1774]: E0813 01:07:01.538848 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:01.540456 kubelet[1774]: E0813 01:07:01.540384 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:02.134329 kubelet[1774]: I0813 01:07:02.133795 1774 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:07:02.543321 kubelet[1774]: E0813 01:07:02.543180 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:02.960594 kubelet[1774]: I0813 01:07:02.960426 1774 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 01:07:02.960594 kubelet[1774]: E0813 01:07:02.960475 1774 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 01:07:03.531006 kubelet[1774]: I0813 01:07:03.530938 1774 apiserver.go:52] "Watching apiserver" Aug 13 01:07:03.627423 kubelet[1774]: I0813 01:07:03.627337 1774 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:07:06.109561 update_engine[1291]: I0813 01:07:06.109484 1291 update_attempter.cc:509] Updating boot flags... Aug 13 01:07:07.440405 kubelet[1774]: E0813 01:07:07.440351 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:07.550799 kubelet[1774]: E0813 01:07:07.550746 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:08.631704 kubelet[1774]: I0813 01:07:08.631633 1774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6316015419999998 podStartE2EDuration="1.631601542s" podCreationTimestamp="2025-08-13 01:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:07:08.432617487 +0000 UTC m=+10.442701224" watchObservedRunningTime="2025-08-13 01:07:08.631601542 +0000 UTC m=+10.641685279" Aug 13 01:07:08.632445 kubelet[1774]: E0813 01:07:08.632422 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:08.924020 systemd[1]: Reloading. Aug 13 01:07:08.984505 /usr/lib/systemd/system-generators/torcx-generator[2079]: time="2025-08-13T01:07:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:07:08.984535 /usr/lib/systemd/system-generators/torcx-generator[2079]: time="2025-08-13T01:07:08Z" level=info msg="torcx already run" Aug 13 01:07:09.062243 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Aug 13 01:07:09.062258 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:07:09.081512 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:07:09.157617 systemd[1]: Stopping kubelet.service... Aug 13 01:07:09.183895 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:07:09.184218 systemd[1]: Stopped kubelet.service. Aug 13 01:07:09.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:09.185138 kernel: kauditd_printk_skb: 47 callbacks suppressed Aug 13 01:07:09.185196 kernel: audit: type=1131 audit(1755047229.183:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:09.185847 systemd[1]: Starting kubelet.service... Aug 13 01:07:09.350079 systemd[1]: Started kubelet.service. Aug 13 01:07:09.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:09.355643 kernel: audit: type=1130 audit(1755047229.349:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:09.401259 kubelet[2136]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:07:09.401259 kubelet[2136]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:07:09.401259 kubelet[2136]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:07:09.401722 kubelet[2136]: I0813 01:07:09.401326 2136 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:07:09.406872 kubelet[2136]: I0813 01:07:09.406823 2136 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:07:09.406872 kubelet[2136]: I0813 01:07:09.406854 2136 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:07:09.407103 kubelet[2136]: I0813 01:07:09.407089 2136 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:07:09.408230 kubelet[2136]: I0813 01:07:09.408207 2136 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 01:07:09.410962 kubelet[2136]: I0813 01:07:09.410921 2136 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:07:09.414099 kubelet[2136]: E0813 01:07:09.414072 2136 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:07:09.414203 kubelet[2136]: I0813 01:07:09.414186 2136 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:07:09.418244 kubelet[2136]: I0813 01:07:09.418227 2136 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:07:09.418700 kubelet[2136]: I0813 01:07:09.418675 2136 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:07:09.418843 kubelet[2136]: I0813 01:07:09.418778 2136 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:07:09.418983 kubelet[2136]: I0813 01:07:09.418804 2136 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 01:07:09.419097 kubelet[2136]: I0813 01:07:09.418996 2136 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:07:09.419097 kubelet[2136]: I0813 01:07:09.419005 2136 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:07:09.419097 kubelet[2136]: I0813 01:07:09.419034 2136 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:07:09.419216 kubelet[2136]: I0813 01:07:09.419138 2136 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:07:09.419216 kubelet[2136]: I0813 01:07:09.419152 2136 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:07:09.419667 kubelet[2136]: I0813 01:07:09.419555 
2136 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:07:09.419667 kubelet[2136]: I0813 01:07:09.419607 2136 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:07:09.420384 kubelet[2136]: I0813 01:07:09.420363 2136 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 01:07:09.420779 kubelet[2136]: I0813 01:07:09.420761 2136 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:07:09.421282 kubelet[2136]: I0813 01:07:09.421256 2136 server.go:1274] "Started kubelet" Aug 13 01:07:09.422000 audit[2136]: AVC avc: denied { mac_admin } for pid=2136 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:09.422973 kubelet[2136]: I0813 01:07:09.422893 2136 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 01:07:09.422973 kubelet[2136]: I0813 01:07:09.422924 2136 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 01:07:09.422973 kubelet[2136]: I0813 01:07:09.422955 2136 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:07:09.434611 kernel: audit: type=1400 audit(1755047229.422:223): avc: denied { mac_admin } for pid=2136 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:09.434751 kernel: audit: type=1401 audit(1755047229.422:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:07:09.434772 kernel: audit: type=1300 audit(1755047229.422:223): arch=c000003e syscall=188 success=no exit=-22 a0=c000b4e2a0 a1=c000978b88 a2=c000b4e270 a3=25 items=0 ppid=1 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:09.434798 kernel: audit: type=1327 audit(1755047229.422:223): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:07:09.422000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:07:09.422000 audit[2136]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b4e2a0 a1=c000978b88 a2=c000b4e270 a3=25 items=0 ppid=1 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:09.422000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:07:09.435904 kubelet[2136]: I0813 01:07:09.428214 2136 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 
01:07:09.435904 kubelet[2136]: E0813 01:07:09.428473 2136 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:07:09.435904 kubelet[2136]: I0813 01:07:09.428885 2136 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:07:09.435904 kubelet[2136]: I0813 01:07:09.429095 2136 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:07:09.435904 kubelet[2136]: I0813 01:07:09.429974 2136 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:07:09.435904 kubelet[2136]: I0813 01:07:09.431393 2136 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:07:09.435904 kubelet[2136]: I0813 01:07:09.433229 2136 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:07:09.435904 kubelet[2136]: I0813 01:07:09.433516 2136 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:07:09.435904 kubelet[2136]: I0813 01:07:09.433915 2136 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:07:09.442399 kernel: audit: type=1400 audit(1755047229.422:224): avc: denied { mac_admin } for pid=2136 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:09.422000 audit[2136]: AVC avc: denied { mac_admin } for pid=2136 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:09.442613 kubelet[2136]: I0813 01:07:09.438437 2136 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:07:09.442613 kubelet[2136]: I0813 01:07:09.438539 2136 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:07:09.442613 kubelet[2136]: I0813 01:07:09.440639 2136 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:07:09.422000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:07:09.422000 audit[2136]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00040f2a0 a1=c000978ba0 a2=c000b4e330 a3=25 items=0 ppid=1 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:09.445269 kubelet[2136]: E0813 01:07:09.445248 2136 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:07:09.448826 kernel: audit: type=1401 audit(1755047229.422:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:07:09.448871 kernel: audit: type=1300 audit(1755047229.422:224): arch=c000003e syscall=188 success=no exit=-22 a0=c00040f2a0 a1=c000978ba0 a2=c000b4e330 a3=25 items=0 ppid=1 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:09.450436 kubelet[2136]: I0813 01:07:09.450389 2136 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:07:09.422000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:07:09.454605 kernel: audit: type=1327 audit(1755047229.422:224): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:07:09.455485 kubelet[2136]: I0813 01:07:09.455382 2136 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:07:09.455485 kubelet[2136]: I0813 01:07:09.455416 2136 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:07:09.455485 kubelet[2136]: I0813 01:07:09.455437 2136 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:07:09.455665 kubelet[2136]: E0813 01:07:09.455490 2136 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:07:09.482059 kubelet[2136]: I0813 01:07:09.482012 2136 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:07:09.482059 kubelet[2136]: I0813 01:07:09.482035 2136 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:07:09.482059 kubelet[2136]: I0813 01:07:09.482066 2136 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:07:09.482288 kubelet[2136]: I0813 01:07:09.482223 2136 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:07:09.482288 kubelet[2136]: I0813 01:07:09.482233 2136 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:07:09.482288 kubelet[2136]: I0813 01:07:09.482255 2136 policy_none.go:49] "None policy: Start" Aug 13 01:07:09.482940 kubelet[2136]: I0813 01:07:09.482909 2136 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:07:09.482940 kubelet[2136]: I0813 01:07:09.482942 2136 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:07:09.483121 kubelet[2136]: I0813 01:07:09.483098 2136 state_mem.go:75] "Updated machine memory state" Aug 13 01:07:09.484142 kubelet[2136]: I0813 01:07:09.484117 2136 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:07:09.483000 audit[2136]: AVC avc: denied { mac_admin } for pid=2136 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:09.483000 audit: SELINUX_ERR 
op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:07:09.483000 audit[2136]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d62f90 a1=c000f8b1a0 a2=c000d62f60 a3=25 items=0 ppid=1 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:09.483000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:07:09.484547 kubelet[2136]: I0813 01:07:09.484239 2136 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 01:07:09.484547 kubelet[2136]: I0813 01:07:09.484395 2136 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:07:09.484547 kubelet[2136]: I0813 01:07:09.484407 2136 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:07:09.484679 kubelet[2136]: I0813 01:07:09.484604 2136 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:07:09.564401 kubelet[2136]: E0813 01:07:09.564346 2136 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 01:07:09.564702 kubelet[2136]: E0813 01:07:09.564670 2136 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 01:07:09.588377 kubelet[2136]: I0813 01:07:09.588335 2136 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:07:09.598847 kubelet[2136]: I0813 01:07:09.598807 2136 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 01:07:09.599014 kubelet[2136]: I0813 01:07:09.598912 2136 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 01:07:09.630267 kubelet[2136]: I0813 01:07:09.630222 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 01:07:09.630267 kubelet[2136]: I0813 01:07:09.630257 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:07:09.630267 kubelet[2136]: I0813 01:07:09.630275 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:07:09.630497 kubelet[2136]: I0813 01:07:09.630290 2136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:07:09.630497 kubelet[2136]: I0813 01:07:09.630308 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0acdfd6fde62b5b7899736c855c08955-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0acdfd6fde62b5b7899736c855c08955\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:07:09.630497 kubelet[2136]: I0813 01:07:09.630321 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0acdfd6fde62b5b7899736c855c08955-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0acdfd6fde62b5b7899736c855c08955\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:07:09.630497 kubelet[2136]: I0813 01:07:09.630343 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0acdfd6fde62b5b7899736c855c08955-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0acdfd6fde62b5b7899736c855c08955\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:07:09.630497 kubelet[2136]: I0813 01:07:09.630416 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:07:09.630671 kubelet[2136]: I0813 01:07:09.630476 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:07:09.863170 kubelet[2136]: E0813 01:07:09.863136 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:09.865488 kubelet[2136]: E0813 01:07:09.865435 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:09.865488 kubelet[2136]: E0813 01:07:09.865500 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:10.420675 kubelet[2136]: I0813 01:07:10.420633 2136 apiserver.go:52] "Watching apiserver" Aug 13 01:07:10.429872 kubelet[2136]: I0813 01:07:10.429835 2136 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:07:10.463570 kubelet[2136]: E0813 01:07:10.463532 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Aug 13 01:07:10.463570 kubelet[2136]: E0813 01:07:10.463532 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:10.463906 kubelet[2136]: E0813 01:07:10.463885 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:10.505395 kubelet[2136]: I0813 01:07:10.505312 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.505291337 podStartE2EDuration="1.505291337s" podCreationTimestamp="2025-08-13 01:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:07:10.504983217 +0000 UTC m=+1.150113176" watchObservedRunningTime="2025-08-13 01:07:10.505291337 +0000 UTC m=+1.150421296" Aug 13 01:07:10.524648 kubelet[2136]: I0813 01:07:10.524560 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.524533179 podStartE2EDuration="2.524533179s" podCreationTimestamp="2025-08-13 01:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:07:10.515948195 +0000 UTC m=+1.161078144" watchObservedRunningTime="2025-08-13 01:07:10.524533179 +0000 UTC m=+1.169663138" Aug 13 01:07:11.465210 kubelet[2136]: E0813 01:07:11.465145 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:11.465761 kubelet[2136]: E0813 01:07:11.465248 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:12.467079 kubelet[2136]: E0813 01:07:12.467024 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:13.736038 kubelet[2136]: I0813 01:07:13.736000 2136 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:07:13.736827 env[1307]: time="2025-08-13T01:07:13.736738062Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 01:07:13.737097 kubelet[2136]: I0813 01:07:13.737066 2136 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:07:14.864832 kubelet[2136]: I0813 01:07:14.864795 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzq5\" (UniqueName: \"kubernetes.io/projected/151148dc-b465-4908-8698-78cb86465f71-kube-api-access-sxzq5\") pod \"kube-proxy-nkdtt\" (UID: \"151148dc-b465-4908-8698-78cb86465f71\") " pod="kube-system/kube-proxy-nkdtt" Aug 13 01:07:14.864832 kubelet[2136]: I0813 01:07:14.864828 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/151148dc-b465-4908-8698-78cb86465f71-kube-proxy\") pod \"kube-proxy-nkdtt\" (UID: \"151148dc-b465-4908-8698-78cb86465f71\") " pod="kube-system/kube-proxy-nkdtt" Aug 13 01:07:14.864832 kubelet[2136]: I0813 01:07:14.864846 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/151148dc-b465-4908-8698-78cb86465f71-xtables-lock\") pod \"kube-proxy-nkdtt\" (UID: \"151148dc-b465-4908-8698-78cb86465f71\") " pod="kube-system/kube-proxy-nkdtt" Aug 13 01:07:14.865271 kubelet[2136]: I0813 01:07:14.864860 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/151148dc-b465-4908-8698-78cb86465f71-lib-modules\") pod \"kube-proxy-nkdtt\" (UID: \"151148dc-b465-4908-8698-78cb86465f71\") " pod="kube-system/kube-proxy-nkdtt" Aug 13 01:07:14.965720 kubelet[2136]: I0813 01:07:14.965655 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0baef0e2-ad82-4cfc-be4f-276437cd8cd2-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-6fcsn\" (UID: \"0baef0e2-ad82-4cfc-be4f-276437cd8cd2\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-6fcsn" Aug 13 01:07:14.965878 kubelet[2136]: I0813 01:07:14.965738 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69kdn\" (UniqueName: \"kubernetes.io/projected/0baef0e2-ad82-4cfc-be4f-276437cd8cd2-kube-api-access-69kdn\") pod \"tigera-operator-5bf8dfcb4-6fcsn\" (UID: \"0baef0e2-ad82-4cfc-be4f-276437cd8cd2\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-6fcsn" Aug 13 01:07:14.970762 kubelet[2136]: I0813 01:07:14.970726 2136 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 01:07:15.091064 kubelet[2136]: E0813 01:07:15.091025 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:15.091531 env[1307]: time="2025-08-13T01:07:15.091483938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nkdtt,Uid:151148dc-b465-4908-8698-78cb86465f71,Namespace:kube-system,Attempt:0,}" Aug 13 01:07:15.200954 env[1307]: time="2025-08-13T01:07:15.200803090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:15.200954 env[1307]: time="2025-08-13T01:07:15.200850738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:15.200954 env[1307]: time="2025-08-13T01:07:15.200861172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:15.201157 env[1307]: time="2025-08-13T01:07:15.200996959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3c785918fc371f156550344cd9598cb39765df55b414c11eecaddd5571d30f0 pid=2193 runtime=io.containerd.runc.v2 Aug 13 01:07:15.205021 env[1307]: time="2025-08-13T01:07:15.202627335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-6fcsn,Uid:0baef0e2-ad82-4cfc-be4f-276437cd8cd2,Namespace:tigera-operator,Attempt:0,}" Aug 13 01:07:15.220811 env[1307]: time="2025-08-13T01:07:15.220726724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:15.220811 env[1307]: time="2025-08-13T01:07:15.220766214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:15.221043 env[1307]: time="2025-08-13T01:07:15.220780235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:15.221530 env[1307]: time="2025-08-13T01:07:15.221232120Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8bd01bda5519432b3e80f3808e5f3dcea445ef6a37f3f9895b3bba13c875fbf pid=2226 runtime=io.containerd.runc.v2 Aug 13 01:07:15.234386 env[1307]: time="2025-08-13T01:07:15.233717964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nkdtt,Uid:151148dc-b465-4908-8698-78cb86465f71,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3c785918fc371f156550344cd9598cb39765df55b414c11eecaddd5571d30f0\"" Aug 13 01:07:15.234561 kubelet[2136]: E0813 01:07:15.234309 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:15.237945 env[1307]: time="2025-08-13T01:07:15.237913825Z" level=info msg="CreateContainer within sandbox \"f3c785918fc371f156550344cd9598cb39765df55b414c11eecaddd5571d30f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:07:15.255727 env[1307]: time="2025-08-13T01:07:15.255687636Z" level=info msg="CreateContainer within sandbox \"f3c785918fc371f156550344cd9598cb39765df55b414c11eecaddd5571d30f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"42846eeb7ca0b5e6ad74e1ecb42fbed2d6580dc9d7f6feff3a684cd953de1ecc\"" Aug 13 01:07:15.258011 env[1307]: time="2025-08-13T01:07:15.257746162Z" level=info msg="StartContainer for \"42846eeb7ca0b5e6ad74e1ecb42fbed2d6580dc9d7f6feff3a684cd953de1ecc\"" Aug 13 01:07:15.267377 env[1307]: time="2025-08-13T01:07:15.267332584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-6fcsn,Uid:0baef0e2-ad82-4cfc-be4f-276437cd8cd2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a8bd01bda5519432b3e80f3808e5f3dcea445ef6a37f3f9895b3bba13c875fbf\"" Aug 13 01:07:15.269526 
env[1307]: time="2025-08-13T01:07:15.269037708Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:07:15.312804 env[1307]: time="2025-08-13T01:07:15.312727808Z" level=info msg="StartContainer for \"42846eeb7ca0b5e6ad74e1ecb42fbed2d6580dc9d7f6feff3a684cd953de1ecc\" returns successfully" Aug 13 01:07:15.412639 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 13 01:07:15.412787 kernel: audit: type=1325 audit(1755047235.408:226): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.408000 audit[2336]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.408000 audit[2336]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd62767860 a2=0 a3=7ffd6276784c items=0 ppid=2285 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.417265 kernel: audit: type=1300 audit(1755047235.408:226): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd62767860 a2=0 a3=7ffd6276784c items=0 ppid=2285 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 01:07:15.419613 kernel: audit: type=1327 audit(1755047235.408:226): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 01:07:15.419659 kernel: audit: type=1325 audit(1755047235.408:227): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.408000 audit[2337]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.408000 audit[2337]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc927193e0 a2=0 a3=7ffc927193cc items=0 ppid=2285 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.426131 kernel: audit: type=1300 audit(1755047235.408:227): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc927193e0 a2=0 a3=7ffc927193cc items=0 ppid=2285 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.426196 kernel: audit: type=1327 audit(1755047235.408:227): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 01:07:15.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 01:07:15.428237 kernel: audit: type=1325 audit(1755047235.409:228): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.409000 audit[2338]: NETFILTER_CFG table=nat:40 
family=10 entries=1 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.430287 kernel: audit: type=1300 audit(1755047235.409:228): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee07cfb70 a2=0 a3=7ffee07cfb5c items=0 ppid=2285 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.409000 audit[2338]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee07cfb70 a2=0 a3=7ffee07cfb5c items=0 ppid=2285 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.434615 kernel: audit: type=1327 audit(1755047235.409:228): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 01:07:15.409000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 01:07:15.410000 audit[2339]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.438746 kernel: audit: type=1325 audit(1755047235.410:229): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.410000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcff6aa810 a2=0 a3=7ffcff6aa7fc items=0 ppid=2285 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.410000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 01:07:15.413000 audit[2340]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.413000 audit[2340]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce08e0650 a2=0 a3=7ffce08e063c items=0 ppid=2285 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 01:07:15.415000 audit[2341]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.415000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc0a76cf0 a2=0 a3=7ffcc0a76cdc items=0 ppid=2285 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 01:07:15.474207 kubelet[2136]: E0813 01:07:15.474130 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:15.484541 kubelet[2136]: I0813 01:07:15.484480 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nkdtt" podStartSLOduration=1.484455963 podStartE2EDuration="1.484455963s" podCreationTimestamp="2025-08-13 01:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:07:15.484059815 +0000 UTC m=+6.129189774" watchObservedRunningTime="2025-08-13 01:07:15.484455963 +0000 UTC m=+6.129585922" Aug 13 01:07:15.513000 audit[2342]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.513000 audit[2342]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffee222d090 a2=0 a3=7ffee222d07c items=0 ppid=2285 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.513000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 01:07:15.516000 audit[2344]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.516000 audit[2344]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff45087710 a2=0 a3=7fff450876fc items=0 ppid=2285 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Aug 13 01:07:15.519000 audit[2347]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.519000 audit[2347]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc2c3fd090 a2=0 a3=7ffc2c3fd07c items=0 ppid=2285 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.519000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Aug 13 01:07:15.520000 audit[2348]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.520000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb51332a0 a2=0 a3=7fffb513328c items=0 ppid=2285 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.520000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 01:07:15.522000 audit[2350]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.522000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe5fed9d40 a2=0 a3=7ffe5fed9d2c items=0 ppid=2285 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.522000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 01:07:15.523000 audit[2351]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.523000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3b801030 a2=0 a3=7ffe3b80101c items=0 ppid=2285 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.523000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 01:07:15.525000 audit[2353]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.525000 audit[2353]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff84015ad0 a2=0 a3=7fff84015abc items=0 ppid=2285 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 01:07:15.528000 audit[2356]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.528000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffee8b31910 a2=0 a3=7ffee8b318fc items=0 ppid=2285 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.528000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Aug 13 01:07:15.529000 audit[2357]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.529000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff46978c50 a2=0 a3=7fff46978c3c items=0 
ppid=2285 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.529000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 01:07:15.531000 audit[2359]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.531000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffec63e2610 a2=0 a3=7ffec63e25fc items=0 ppid=2285 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.531000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 01:07:15.532000 audit[2360]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.532000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc08882970 a2=0 a3=7ffc0888295c items=0 ppid=2285 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.532000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 01:07:15.534000 audit[2362]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.534000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeef5f4ff0 a2=0 a3=7ffeef5f4fdc items=0 ppid=2285 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.534000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 01:07:15.537000 audit[2365]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.537000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf2998030 a2=0 a3=7ffcf299801c items=0 ppid=2285 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.537000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 01:07:15.541000 audit[2368]: NETFILTER_CFG table=filter:57 
family=2 entries=1 op=nft_register_rule pid=2368 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.541000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed358c0d0 a2=0 a3=7ffed358c0bc items=0 ppid=2285 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 01:07:15.541000 audit[2369]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.541000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4403c480 a2=0 a3=7ffe4403c46c items=0 ppid=2285 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 01:07:15.543000 audit[2371]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.543000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcdf6a8680 a2=0 a3=7ffcdf6a866c items=0 ppid=2285 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.543000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 01:07:15.546000 audit[2374]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.546000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffebca54e30 a2=0 a3=7ffebca54e1c items=0 ppid=2285 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.546000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 01:07:15.547000 audit[2375]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.547000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9a6f71a0 a2=0 a3=7ffe9a6f718c items=0 ppid=2285 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.547000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 01:07:15.549000 audit[2377]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:07:15.549000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc70ad7ae0 a2=0 a3=7ffc70ad7acc items=0 ppid=2285 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 01:07:15.568000 audit[2383]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:15.568000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd032f10b0 a2=0 a3=7ffd032f109c items=0 ppid=2285 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.568000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:15.579000 audit[2383]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:15.579000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd032f10b0 a2=0 a3=7ffd032f109c items=0 ppid=2285 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:15.580000 audit[2388]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.580000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffde3725d20 a2=0 a3=7ffde3725d0c items=0 ppid=2285 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 01:07:15.582000 audit[2390]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.582000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffde7700790 a2=0 a3=7ffde770077c items=0 ppid=2285 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.582000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Aug 13 01:07:15.585000 audit[2393]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.585000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe03a39ff0 a2=0 a3=7ffe03a39fdc items=0 ppid=2285 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Aug 13 01:07:15.586000 audit[2394]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.586000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9bad2a20 a2=0 a3=7ffe9bad2a0c items=0 ppid=2285 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.586000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 01:07:15.588000 audit[2396]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.588000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffa167c30 a2=0 a3=7ffffa167c1c items=0 ppid=2285 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 01:07:15.589000 audit[2397]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.589000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd00d1a250 a2=0 a3=7ffd00d1a23c items=0 ppid=2285 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 01:07:15.591000 audit[2399]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.591000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb069a0f0 a2=0 
a3=7fffb069a0dc items=0 ppid=2285 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.591000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Aug 13 01:07:15.594000 audit[2402]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.594000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff43818fd0 a2=0 a3=7fff43818fbc items=0 ppid=2285 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.594000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 01:07:15.595000 audit[2403]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.595000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9c33a7e0 a2=0 a3=7ffe9c33a7cc items=0 ppid=2285 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.595000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 01:07:15.597000 audit[2405]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.597000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe869f32e0 a2=0 a3=7ffe869f32cc items=0 ppid=2285 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.597000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 01:07:15.598000 audit[2406]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.598000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc953e3f0 a2=0 a3=7ffdc953e3dc items=0 ppid=2285 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.598000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 01:07:15.600000 
audit[2408]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.600000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc7c41ae80 a2=0 a3=7ffc7c41ae6c items=0 ppid=2285 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 01:07:15.603000 audit[2411]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.603000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdecae08f0 a2=0 a3=7ffdecae08dc items=0 ppid=2285 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 01:07:15.606000 audit[2414]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.606000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc10a296d0 a2=0 a3=7ffc10a296bc items=0 ppid=2285 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.606000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Aug 13 01:07:15.606000 audit[2415]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.606000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff5ac97360 a2=0 a3=7fff5ac9734c items=0 ppid=2285 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.606000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 01:07:15.608000 audit[2417]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.608000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe613e52c0 a2=0 a3=7ffe613e52ac items=0 ppid=2285 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.608000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 01:07:15.611000 audit[2420]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.611000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd65acf450 a2=0 a3=7ffd65acf43c items=0 ppid=2285 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.611000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 01:07:15.612000 audit[2421]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.612000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd966a7a0 a2=0 a3=7fffd966a78c items=0 ppid=2285 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.612000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 01:07:15.614000 audit[2423]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.614000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc6b904630 a2=0 a3=7ffc6b90461c items=0 ppid=2285 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.614000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 01:07:15.615000 audit[2424]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.615000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb1abaf90 a2=0 a3=7ffdb1abaf7c items=0 ppid=2285 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.615000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 01:07:15.617000 audit[2426]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2426 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.617000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7fff731143d0 a2=0 a3=7fff731143bc items=0 ppid=2285 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 01:07:15.621000 audit[2429]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:07:15.621000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc0db69830 a2=0 a3=7ffc0db6981c items=0 ppid=2285 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.621000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 01:07:15.624000 audit[2431]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 01:07:15.624000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff49870a20 a2=0 a3=7fff49870a0c items=0 ppid=2285 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.624000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:15.624000 audit[2431]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 01:07:15.624000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff49870a20 a2=0 a3=7fff49870a0c items=0 ppid=2285 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:15.624000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:15.937900 kubelet[2136]: E0813 01:07:15.937865 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:16.475943 kubelet[2136]: E0813 01:07:16.475871 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:16.502446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915576009.mount: Deactivated successfully. 
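
Annotation (not part of the log): the audit PROCTITLE fields in the records above are the invoking command lines, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch for reading them; the helper name decode_proctitle is ours, and the example input is the first PROCTITLE value recorded above, which decodes to the kube-proxy chain-creation call iptables -w 5 -W 100000 -N KUBE-FORWARD -t filter.

```python
# Sketch: audit PROCTITLE values are the process argv, hex-encoded,
# with NUL bytes separating the individual arguments.
def decode_proctitle(hex_str: str) -> str:
    return " ".join(bytes.fromhex(hex_str).decode("utf-8").split("\x00"))

# First PROCTITLE value recorded above:
print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D464F5257415244002D740066696C746572"
))
# iptables -w 5 -W 100000 -N KUBE-FORWARD -t filter
```

The same decoding applies to the iptables-restore and ip6tables records that follow; the `-w 5 -W 100000` prefix is just the xtables lock wait options.
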
Aug 13 01:07:17.477398 kubelet[2136]: E0813 01:07:17.477363 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:17.623289 env[1307]: time="2025-08-13T01:07:17.623227253Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:17.624968 env[1307]: time="2025-08-13T01:07:17.624934759Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:17.626547 env[1307]: time="2025-08-13T01:07:17.626484803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:17.627987 env[1307]: time="2025-08-13T01:07:17.627953827Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:17.628572 env[1307]: time="2025-08-13T01:07:17.628536719Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:07:17.630697 env[1307]: time="2025-08-13T01:07:17.630642325Z" level=info msg="CreateContainer within sandbox \"a8bd01bda5519432b3e80f3808e5f3dcea445ef6a37f3f9895b3bba13c875fbf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 01:07:17.642273 env[1307]: time="2025-08-13T01:07:17.642225883Z" level=info msg="CreateContainer within sandbox \"a8bd01bda5519432b3e80f3808e5f3dcea445ef6a37f3f9895b3bba13c875fbf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"40980070bc399b62b431e70630fe1327297c8e11c30aeb8e8208e57b0e284e64\"" Aug 13 01:07:17.642839 env[1307]: time="2025-08-13T01:07:17.642789202Z" level=info msg="StartContainer for \"40980070bc399b62b431e70630fe1327297c8e11c30aeb8e8208e57b0e284e64\"" Aug 13 01:07:17.682758 env[1307]: time="2025-08-13T01:07:17.682707948Z" level=info msg="StartContainer for \"40980070bc399b62b431e70630fe1327297c8e11c30aeb8e8208e57b0e284e64\" returns successfully" Aug 13 01:07:19.493939 kubelet[2136]: E0813 01:07:19.493896 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:19.528166 kubelet[2136]: I0813 01:07:19.528073 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-6fcsn" podStartSLOduration=3.16717663 podStartE2EDuration="5.528050781s" podCreationTimestamp="2025-08-13 01:07:14 +0000 UTC" firstStartedPulling="2025-08-13 01:07:15.268554222 +0000 UTC m=+5.913684181" lastFinishedPulling="2025-08-13 01:07:17.629428372 +0000 UTC m=+8.274558332" observedRunningTime="2025-08-13 01:07:18.488819185 +0000 UTC m=+9.133949174" watchObservedRunningTime="2025-08-13 01:07:19.528050781 +0000 UTC m=+10.173180740" Aug 13 01:07:20.488043 kubelet[2136]: E0813 01:07:20.487996 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:23.272639 sudo[1470]: pam_unix(sudo:session): session closed for user root Aug 13 01:07:23.272000 audit[1470]: USER_END pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:07:23.276849 kernel: kauditd_printk_skb: 143 callbacks suppressed Aug 13 01:07:23.276991 kernel: audit: type=1106 audit(1755047243.272:277): pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:07:23.280346 kernel: audit: type=1104 audit(1755047243.276:278): pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:07:23.276000 audit[1470]: CRED_DISP pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:07:23.282574 sshd[1464]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:23.284000 audit[1464]: USER_END pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:23.293006 kernel: audit: type=1106 audit(1755047243.284:279): pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:23.293100 kernel: audit: type=1104 audit(1755047243.288:280): pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:23.293132 kernel: audit: type=1131 audit(1755047243.290:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.139:22-10.0.0.1:49724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:23.288000 audit[1464]: CRED_DISP pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:23.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.139:22-10.0.0.1:49724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:23.291034 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:49724.service: Deactivated successfully. Aug 13 01:07:23.291987 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:07:23.294207 systemd-logind[1290]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:07:23.295418 systemd-logind[1290]: Removed session 7. 
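
Annotation (not part of the log): the pod_startup_latency_tracker record a few entries above reports two figures for the tigera-operator pod: podStartE2EDuration (observedRunningTime minus podCreationTimestamp) and podStartSLOduration, which, judging by the numbers, is the same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) excluded. A quick check with the timestamps quoted in that record; the variables below are plain local names, nothing from the log's tooling.

```python
# Offsets in seconds after 01:07:00, copied from the log record above.
pod_created           = 14.0            # podCreationTimestamp 01:07:14
first_started_pulling = 15.268554222    # firstStartedPulling
last_finished_pulling = 17.629428372    # lastFinishedPulling
observed_running      = 19.528050781    # observedRunningTime

e2e = observed_running - pod_created                         # podStartE2EDuration
slo = e2e - (last_finished_pulling - first_started_pulling)  # podStartSLOduration
print(f"{e2e:.9f}s", f"{slo:.8f}s")     # 5.528050781s 3.16717663s
```

Both values match the logged podStartE2EDuration="5.528050781s" and podStartSLOduration=3.16717663 exactly.
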
Aug 13 01:07:23.527000 audit[2525]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:23.527000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe9800eff0 a2=0 a3=7ffe9800efdc items=0 ppid=2285 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:23.536676 kernel: audit: type=1325 audit(1755047243.527:282): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:23.536762 kernel: audit: type=1300 audit(1755047243.527:282): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe9800eff0 a2=0 a3=7ffe9800efdc items=0 ppid=2285 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:23.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:23.539240 kernel: audit: type=1327 audit(1755047243.527:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:23.538000 audit[2525]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:23.538000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe9800eff0 a2=0 a3=0 items=0 ppid=2285 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:23.546665 kernel: audit: type=1325 audit(1755047243.538:283): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:23.546741 kernel: audit: type=1300 audit(1755047243.538:283): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe9800eff0 a2=0 a3=0 items=0 ppid=2285 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:23.538000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:23.553000 audit[2527]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:23.553000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffce26a3460 a2=0 a3=7ffce26a344c items=0 ppid=2285 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:23.553000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:23.557000 audit[2527]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2527 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:23.557000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce26a3460 a2=0 a3=0 items=0 ppid=2285 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:23.557000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:25.352000 audit[2529]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:25.352000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffca63dc950 a2=0 a3=7ffca63dc93c items=0 ppid=2285 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:25.352000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:25.357000 audit[2529]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:25.357000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca63dc950 a2=0 a3=0 items=0 ppid=2285 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:25.357000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:25.431576 kubelet[2136]: I0813 01:07:25.431522 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/edf2a2af-f2b0-4c14-b76b-69f2204ae13e-typha-certs\") pod \"calico-typha-5bdbd956bf-28bb8\" (UID: \"edf2a2af-f2b0-4c14-b76b-69f2204ae13e\") " pod="calico-system/calico-typha-5bdbd956bf-28bb8" Aug 13 01:07:25.431576 kubelet[2136]: I0813 01:07:25.431573 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edf2a2af-f2b0-4c14-b76b-69f2204ae13e-tigera-ca-bundle\") pod \"calico-typha-5bdbd956bf-28bb8\" (UID: \"edf2a2af-f2b0-4c14-b76b-69f2204ae13e\") " pod="calico-system/calico-typha-5bdbd956bf-28bb8" Aug 13 01:07:25.432125 kubelet[2136]: I0813 01:07:25.431608 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txzh\" (UniqueName: \"kubernetes.io/projected/edf2a2af-f2b0-4c14-b76b-69f2204ae13e-kube-api-access-8txzh\") pod \"calico-typha-5bdbd956bf-28bb8\" (UID: \"edf2a2af-f2b0-4c14-b76b-69f2204ae13e\") " pod="calico-system/calico-typha-5bdbd956bf-28bb8" Aug 13 01:07:25.696008 kubelet[2136]: E0813 01:07:25.695861 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:25.696603 env[1307]: time="2025-08-13T01:07:25.696510775Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-5bdbd956bf-28bb8,Uid:edf2a2af-f2b0-4c14-b76b-69f2204ae13e,Namespace:calico-system,Attempt:0,}" Aug 13 01:07:25.719244 env[1307]: time="2025-08-13T01:07:25.719172954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:25.719244 env[1307]: time="2025-08-13T01:07:25.719216407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:25.719244 env[1307]: time="2025-08-13T01:07:25.719227300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:25.719511 env[1307]: time="2025-08-13T01:07:25.719468277Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6cd1cdc2de609302ea81d79faac0c4879bf0e509aaa28ba4df1e737e924326c pid=2539 runtime=io.containerd.runc.v2 Aug 13 01:07:25.781320 env[1307]: time="2025-08-13T01:07:25.781264195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bdbd956bf-28bb8,Uid:edf2a2af-f2b0-4c14-b76b-69f2204ae13e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6cd1cdc2de609302ea81d79faac0c4879bf0e509aaa28ba4df1e737e924326c\"" Aug 13 01:07:25.782014 kubelet[2136]: E0813 01:07:25.781987 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:25.783413 env[1307]: time="2025-08-13T01:07:25.783378433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 01:07:25.833069 kubelet[2136]: I0813 01:07:25.833016 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c243cae-c801-4313-b3ce-f935fa1cd8ea-lib-modules\") pod \"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.833069 kubelet[2136]: I0813 01:07:25.833051 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7c243cae-c801-4313-b3ce-f935fa1cd8ea-policysync\") pod \"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.833069 kubelet[2136]: I0813 01:07:25.833066 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7c243cae-c801-4313-b3ce-f935fa1cd8ea-node-certs\") pod \"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.833069 kubelet[2136]: I0813 01:07:25.833080 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84n9l\" (UniqueName: \"kubernetes.io/projected/7c243cae-c801-4313-b3ce-f935fa1cd8ea-kube-api-access-84n9l\") pod \"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.833364 kubelet[2136]: I0813 01:07:25.833097 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7c243cae-c801-4313-b3ce-f935fa1cd8ea-var-lib-calico\") pod 
\"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.833364 kubelet[2136]: I0813 01:07:25.833144 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c243cae-c801-4313-b3ce-f935fa1cd8ea-xtables-lock\") pod \"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.833364 kubelet[2136]: I0813 01:07:25.833205 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7c243cae-c801-4313-b3ce-f935fa1cd8ea-cni-log-dir\") pod \"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.833364 kubelet[2136]: I0813 01:07:25.833265 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7c243cae-c801-4313-b3ce-f935fa1cd8ea-flexvol-driver-host\") pod \"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.833364 kubelet[2136]: I0813 01:07:25.833313 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c243cae-c801-4313-b3ce-f935fa1cd8ea-tigera-ca-bundle\") pod \"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.833497 kubelet[2136]: I0813 01:07:25.833332 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7c243cae-c801-4313-b3ce-f935fa1cd8ea-cni-net-dir\") pod \"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.833497 kubelet[2136]: I0813 01:07:25.833347 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7c243cae-c801-4313-b3ce-f935fa1cd8ea-cni-bin-dir\") pod \"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.833497 kubelet[2136]: I0813 01:07:25.833363 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7c243cae-c801-4313-b3ce-f935fa1cd8ea-var-run-calico\") pod \"calico-node-blm7h\" (UID: \"7c243cae-c801-4313-b3ce-f935fa1cd8ea\") " pod="calico-system/calico-node-blm7h" Aug 13 01:07:25.936000 kubelet[2136]: E0813 01:07:25.935938 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:25.936000 kubelet[2136]: W0813 01:07:25.936001 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:25.936213 kubelet[2136]: E0813 01:07:25.936040 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:25.936782 kubelet[2136]: E0813 01:07:25.936427 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:25.936782 kubelet[2136]: W0813 01:07:25.936441 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:25.936782 kubelet[2136]: E0813 01:07:25.936451 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:25.937380 kubelet[2136]: E0813 01:07:25.937352 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:25.937380 kubelet[2136]: W0813 01:07:25.937366 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:25.937380 kubelet[2136]: E0813 01:07:25.937377 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:25.941075 kubelet[2136]: E0813 01:07:25.941049 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:25.941075 kubelet[2136]: W0813 01:07:25.941074 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:25.941172 kubelet[2136]: E0813 01:07:25.941103 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.095141 kubelet[2136]: E0813 01:07:26.095089 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgm6z" podUID="2333ecfa-adb6-4791-8fd0-6a082b51d429" Aug 13 01:07:26.105965 env[1307]: time="2025-08-13T01:07:26.105903267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-blm7h,Uid:7c243cae-c801-4313-b3ce-f935fa1cd8ea,Namespace:calico-system,Attempt:0,}" Aug 13 01:07:26.122146 env[1307]: time="2025-08-13T01:07:26.122088876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:26.122306 env[1307]: time="2025-08-13T01:07:26.122128090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:26.122306 env[1307]: time="2025-08-13T01:07:26.122137871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:26.122396 env[1307]: time="2025-08-13T01:07:26.122323637Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfc29cfd0ff4f892c90bd579dfea127e44773b051b33311b7b3a6979847e9118 pid=2596 runtime=io.containerd.runc.v2 Aug 13 01:07:26.130487 kubelet[2136]: E0813 01:07:26.129632 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.130487 kubelet[2136]: W0813 01:07:26.129664 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.130487 kubelet[2136]: E0813 01:07:26.129693 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.130487 kubelet[2136]: E0813 01:07:26.129999 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.130487 kubelet[2136]: W0813 01:07:26.130010 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.130487 kubelet[2136]: E0813 01:07:26.130037 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.130487 kubelet[2136]: E0813 01:07:26.130228 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.130487 kubelet[2136]: W0813 01:07:26.130237 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.130487 kubelet[2136]: E0813 01:07:26.130245 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.130487 kubelet[2136]: E0813 01:07:26.130395 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.131559 kubelet[2136]: W0813 01:07:26.130401 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.131559 kubelet[2136]: E0813 01:07:26.130408 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.131559 kubelet[2136]: E0813 01:07:26.130615 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.131559 kubelet[2136]: W0813 01:07:26.130624 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.131559 kubelet[2136]: E0813 01:07:26.130633 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.131559 kubelet[2136]: E0813 01:07:26.130973 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.131559 kubelet[2136]: W0813 01:07:26.130984 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.131559 kubelet[2136]: E0813 01:07:26.130994 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.131559 kubelet[2136]: E0813 01:07:26.131149 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.131559 kubelet[2136]: W0813 01:07:26.131158 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.131932 kubelet[2136]: E0813 01:07:26.131168 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.131932 kubelet[2136]: E0813 01:07:26.131352 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.131932 kubelet[2136]: W0813 01:07:26.131362 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.131932 kubelet[2136]: E0813 01:07:26.131374 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.132797 kubelet[2136]: E0813 01:07:26.132769 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.132797 kubelet[2136]: W0813 01:07:26.132789 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.134706 kubelet[2136]: E0813 01:07:26.132804 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.134706 kubelet[2136]: E0813 01:07:26.132984 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.134706 kubelet[2136]: W0813 01:07:26.133101 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.134706 kubelet[2136]: E0813 01:07:26.133115 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.134706 kubelet[2136]: E0813 01:07:26.133331 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.134706 kubelet[2136]: W0813 01:07:26.133369 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.134706 kubelet[2136]: E0813 01:07:26.133534 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.134706 kubelet[2136]: E0813 01:07:26.133732 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.134706 kubelet[2136]: W0813 01:07:26.133740 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.134706 kubelet[2136]: E0813 01:07:26.133749 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.135019 kubelet[2136]: E0813 01:07:26.133909 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.135019 kubelet[2136]: W0813 01:07:26.133919 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.135019 kubelet[2136]: E0813 01:07:26.133930 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.135019 kubelet[2136]: E0813 01:07:26.134084 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.135019 kubelet[2136]: W0813 01:07:26.134091 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.135019 kubelet[2136]: E0813 01:07:26.134100 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.135019 kubelet[2136]: E0813 01:07:26.134212 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.135019 kubelet[2136]: W0813 01:07:26.134219 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.135019 kubelet[2136]: E0813 01:07:26.134226 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.135019 kubelet[2136]: E0813 01:07:26.134328 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.135254 kubelet[2136]: W0813 01:07:26.134334 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.135254 kubelet[2136]: E0813 01:07:26.134341 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.135254 kubelet[2136]: E0813 01:07:26.134482 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.135254 kubelet[2136]: W0813 01:07:26.134490 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.135254 kubelet[2136]: E0813 01:07:26.134498 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.135254 kubelet[2136]: E0813 01:07:26.134644 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.135254 kubelet[2136]: W0813 01:07:26.134654 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.135254 kubelet[2136]: E0813 01:07:26.134663 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.135254 kubelet[2136]: E0813 01:07:26.134861 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.135254 kubelet[2136]: W0813 01:07:26.134872 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.135486 kubelet[2136]: E0813 01:07:26.134883 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.135486 kubelet[2136]: E0813 01:07:26.135095 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.135486 kubelet[2136]: W0813 01:07:26.135107 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.135486 kubelet[2136]: E0813 01:07:26.135118 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.135486 kubelet[2136]: E0813 01:07:26.135398 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.135486 kubelet[2136]: W0813 01:07:26.135407 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.135486 kubelet[2136]: E0813 01:07:26.135416 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.135486 kubelet[2136]: I0813 01:07:26.135438 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bctrh\" (UniqueName: \"kubernetes.io/projected/2333ecfa-adb6-4791-8fd0-6a082b51d429-kube-api-access-bctrh\") pod \"csi-node-driver-bgm6z\" (UID: \"2333ecfa-adb6-4791-8fd0-6a082b51d429\") " pod="calico-system/csi-node-driver-bgm6z" Aug 13 01:07:26.135748 kubelet[2136]: E0813 01:07:26.135638 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.135748 kubelet[2136]: W0813 01:07:26.135652 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.135748 kubelet[2136]: E0813 01:07:26.135663 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.135748 kubelet[2136]: I0813 01:07:26.135681 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2333ecfa-adb6-4791-8fd0-6a082b51d429-socket-dir\") pod \"csi-node-driver-bgm6z\" (UID: \"2333ecfa-adb6-4791-8fd0-6a082b51d429\") " pod="calico-system/csi-node-driver-bgm6z" Aug 13 01:07:26.135873 kubelet[2136]: E0813 01:07:26.135850 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.135873 kubelet[2136]: W0813 01:07:26.135868 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.135963 kubelet[2136]: E0813 01:07:26.135879 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.135963 kubelet[2136]: I0813 01:07:26.135899 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2333ecfa-adb6-4791-8fd0-6a082b51d429-registration-dir\") pod \"csi-node-driver-bgm6z\" (UID: \"2333ecfa-adb6-4791-8fd0-6a082b51d429\") " pod="calico-system/csi-node-driver-bgm6z" Aug 13 01:07:26.136132 kubelet[2136]: E0813 01:07:26.136111 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.136132 kubelet[2136]: W0813 01:07:26.136127 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.136228 kubelet[2136]: E0813 01:07:26.136137 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.136228 kubelet[2136]: I0813 01:07:26.136151 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2333ecfa-adb6-4791-8fd0-6a082b51d429-kubelet-dir\") pod \"csi-node-driver-bgm6z\" (UID: \"2333ecfa-adb6-4791-8fd0-6a082b51d429\") " pod="calico-system/csi-node-driver-bgm6z" Aug 13 01:07:26.136340 kubelet[2136]: E0813 01:07:26.136318 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.136340 kubelet[2136]: W0813 01:07:26.136335 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.136431 kubelet[2136]: E0813 01:07:26.136347 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.136431 kubelet[2136]: I0813 01:07:26.136365 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2333ecfa-adb6-4791-8fd0-6a082b51d429-varrun\") pod \"csi-node-driver-bgm6z\" (UID: \"2333ecfa-adb6-4791-8fd0-6a082b51d429\") " pod="calico-system/csi-node-driver-bgm6z" Aug 13 01:07:26.136608 kubelet[2136]: E0813 01:07:26.136558 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.136608 kubelet[2136]: W0813 01:07:26.136573 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.136608 kubelet[2136]: E0813 01:07:26.136597 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.136818 kubelet[2136]: E0813 01:07:26.136798 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.136818 kubelet[2136]: W0813 01:07:26.136815 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.136943 kubelet[2136]: E0813 01:07:26.136893 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.137008 kubelet[2136]: E0813 01:07:26.136984 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.137008 kubelet[2136]: W0813 01:07:26.137001 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.137116 kubelet[2136]: E0813 01:07:26.137096 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.137273 kubelet[2136]: E0813 01:07:26.137250 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.137273 kubelet[2136]: W0813 01:07:26.137266 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.137654 kubelet[2136]: E0813 01:07:26.137279 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.137654 kubelet[2136]: E0813 01:07:26.137628 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.137654 kubelet[2136]: W0813 01:07:26.137640 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.137878 kubelet[2136]: E0813 01:07:26.137657 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.138187 kubelet[2136]: E0813 01:07:26.138156 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.138187 kubelet[2136]: W0813 01:07:26.138171 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.138187 kubelet[2136]: E0813 01:07:26.138185 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.138370 kubelet[2136]: E0813 01:07:26.138349 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.138370 kubelet[2136]: W0813 01:07:26.138362 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.138370 kubelet[2136]: E0813 01:07:26.138371 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.138757 kubelet[2136]: E0813 01:07:26.138735 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.138757 kubelet[2136]: W0813 01:07:26.138752 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.138844 kubelet[2136]: E0813 01:07:26.138765 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.139650 kubelet[2136]: E0813 01:07:26.139005 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.139650 kubelet[2136]: W0813 01:07:26.139022 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.139650 kubelet[2136]: E0813 01:07:26.139046 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.139650 kubelet[2136]: E0813 01:07:26.139238 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.139650 kubelet[2136]: W0813 01:07:26.139249 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.139650 kubelet[2136]: E0813 01:07:26.139260 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.161842 env[1307]: time="2025-08-13T01:07:26.161767179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-blm7h,Uid:7c243cae-c801-4313-b3ce-f935fa1cd8ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"bfc29cfd0ff4f892c90bd579dfea127e44773b051b33311b7b3a6979847e9118\"" Aug 13 01:07:26.237744 kubelet[2136]: E0813 01:07:26.237689 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.237744 kubelet[2136]: W0813 01:07:26.237720 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.237744 kubelet[2136]: E0813 01:07:26.237747 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.238188 kubelet[2136]: E0813 01:07:26.238151 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.238253 kubelet[2136]: W0813 01:07:26.238188 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.238253 kubelet[2136]: E0813 01:07:26.238226 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.238501 kubelet[2136]: E0813 01:07:26.238475 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.238501 kubelet[2136]: W0813 01:07:26.238483 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.238501 kubelet[2136]: E0813 01:07:26.238495 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.238878 kubelet[2136]: E0813 01:07:26.238817 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.238878 kubelet[2136]: W0813 01:07:26.238855 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.238878 kubelet[2136]: E0813 01:07:26.238879 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.239181 kubelet[2136]: E0813 01:07:26.239153 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.239181 kubelet[2136]: W0813 01:07:26.239174 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.239297 kubelet[2136]: E0813 01:07:26.239245 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.239768 kubelet[2136]: E0813 01:07:26.239498 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.239768 kubelet[2136]: W0813 01:07:26.239518 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.239768 kubelet[2136]: E0813 01:07:26.239618 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.240419 kubelet[2136]: E0813 01:07:26.240388 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.240419 kubelet[2136]: W0813 01:07:26.240402 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.240602 kubelet[2136]: E0813 01:07:26.240546 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.240738 kubelet[2136]: E0813 01:07:26.240722 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.240738 kubelet[2136]: W0813 01:07:26.240736 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.240874 kubelet[2136]: E0813 01:07:26.240833 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.240972 kubelet[2136]: E0813 01:07:26.240957 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.240972 kubelet[2136]: W0813 01:07:26.240969 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.241130 kubelet[2136]: E0813 01:07:26.241032 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.241201 kubelet[2136]: E0813 01:07:26.241135 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.241201 kubelet[2136]: W0813 01:07:26.241142 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.241201 kubelet[2136]: E0813 01:07:26.241195 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.241367 kubelet[2136]: E0813 01:07:26.241282 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.241367 kubelet[2136]: W0813 01:07:26.241289 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.241367 kubelet[2136]: E0813 01:07:26.241342 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.241549 kubelet[2136]: E0813 01:07:26.241460 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.241549 kubelet[2136]: W0813 01:07:26.241469 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.241549 kubelet[2136]: E0813 01:07:26.241482 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.241784 kubelet[2136]: E0813 01:07:26.241631 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.241784 kubelet[2136]: W0813 01:07:26.241638 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.241784 kubelet[2136]: E0813 01:07:26.241648 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.241956 kubelet[2136]: E0813 01:07:26.241868 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.241956 kubelet[2136]: W0813 01:07:26.241876 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.241956 kubelet[2136]: E0813 01:07:26.241887 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.242124 kubelet[2136]: E0813 01:07:26.242014 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.242124 kubelet[2136]: W0813 01:07:26.242021 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.242124 kubelet[2136]: E0813 01:07:26.242031 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.242296 kubelet[2136]: E0813 01:07:26.242166 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.242296 kubelet[2136]: W0813 01:07:26.242173 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.242296 kubelet[2136]: E0813 01:07:26.242180 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.242457 kubelet[2136]: E0813 01:07:26.242315 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.242457 kubelet[2136]: W0813 01:07:26.242321 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.242457 kubelet[2136]: E0813 01:07:26.242375 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.242683 kubelet[2136]: E0813 01:07:26.242508 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.242683 kubelet[2136]: W0813 01:07:26.242519 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.242683 kubelet[2136]: E0813 01:07:26.242603 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.242847 kubelet[2136]: E0813 01:07:26.242733 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.242847 kubelet[2136]: W0813 01:07:26.242740 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.242847 kubelet[2136]: E0813 01:07:26.242794 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.243034 kubelet[2136]: E0813 01:07:26.242867 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.243034 kubelet[2136]: W0813 01:07:26.242873 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.243034 kubelet[2136]: E0813 01:07:26.242932 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.243034 kubelet[2136]: E0813 01:07:26.243025 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.243034 kubelet[2136]: W0813 01:07:26.243032 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.243264 kubelet[2136]: E0813 01:07:26.243044 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.243264 kubelet[2136]: E0813 01:07:26.243249 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.243264 kubelet[2136]: W0813 01:07:26.243259 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.243382 kubelet[2136]: E0813 01:07:26.243271 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.243632 kubelet[2136]: E0813 01:07:26.243616 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.243632 kubelet[2136]: W0813 01:07:26.243629 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.243710 kubelet[2136]: E0813 01:07:26.243641 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.243943 kubelet[2136]: E0813 01:07:26.243905 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.243943 kubelet[2136]: W0813 01:07:26.243918 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.243943 kubelet[2136]: E0813 01:07:26.243929 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:26.244090 kubelet[2136]: E0813 01:07:26.244075 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.244090 kubelet[2136]: W0813 01:07:26.244086 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.244170 kubelet[2136]: E0813 01:07:26.244094 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.252924 kubelet[2136]: E0813 01:07:26.252881 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:26.252924 kubelet[2136]: W0813 01:07:26.252907 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:26.252924 kubelet[2136]: E0813 01:07:26.252932 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:26.368000 audit[2693]: NETFILTER_CFG table=filter:95 family=2 entries=20 op=nft_register_rule pid=2693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:26.368000 audit[2693]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffeb4de4c80 a2=0 a3=7ffeb4de4c6c items=0 ppid=2285 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:26.368000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:26.374000 audit[2693]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:26.374000 audit[2693]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb4de4c80 a2=0 a3=0 items=0 ppid=2285 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:26.374000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:27.201875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180504361.mount: Deactivated successfully. 
Aug 13 01:07:27.456768 kubelet[2136]: E0813 01:07:27.456463 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgm6z" podUID="2333ecfa-adb6-4791-8fd0-6a082b51d429" Aug 13 01:07:28.090487 env[1307]: time="2025-08-13T01:07:28.090425032Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:28.092916 env[1307]: time="2025-08-13T01:07:28.092861200Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:28.094807 env[1307]: time="2025-08-13T01:07:28.094782938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:28.096271 env[1307]: time="2025-08-13T01:07:28.096236084Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:28.096757 env[1307]: time="2025-08-13T01:07:28.096722705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 01:07:28.097911 env[1307]: time="2025-08-13T01:07:28.097870124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 01:07:28.110979 env[1307]: time="2025-08-13T01:07:28.110921213Z" level=info msg="CreateContainer within sandbox \"c6cd1cdc2de609302ea81d79faac0c4879bf0e509aaa28ba4df1e737e924326c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 01:07:28.125563 env[1307]: time="2025-08-13T01:07:28.125497932Z" level=info msg="CreateContainer within sandbox \"c6cd1cdc2de609302ea81d79faac0c4879bf0e509aaa28ba4df1e737e924326c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4c02309f35ec897dba22912e58fc532da5997822e08a5bbe19b4881654e9d7a1\"" Aug 13 01:07:28.126234 env[1307]: time="2025-08-13T01:07:28.126189537Z" level=info msg="StartContainer for \"4c02309f35ec897dba22912e58fc532da5997822e08a5bbe19b4881654e9d7a1\"" Aug 13 01:07:29.181827 env[1307]: time="2025-08-13T01:07:29.181757263Z" level=info msg="StartContainer for \"4c02309f35ec897dba22912e58fc532da5997822e08a5bbe19b4881654e9d7a1\" returns successfully" Aug 13 01:07:29.504211 kubelet[2136]: E0813 01:07:29.504068 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgm6z" podUID="2333ecfa-adb6-4791-8fd0-6a082b51d429" Aug 13 01:07:30.186453 kubelet[2136]: E0813 01:07:30.186418 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:30.264264 kubelet[2136]: E0813 01:07:30.264204 2136 driver-call.go:262] Failed to unmarshal output for command: 
init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.264264 kubelet[2136]: W0813 01:07:30.264243 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.264264 kubelet[2136]: E0813 01:07:30.264272 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.264528 kubelet[2136]: E0813 01:07:30.264500 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.264528 kubelet[2136]: W0813 01:07:30.264510 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.264528 kubelet[2136]: E0813 01:07:30.264520 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.264731 kubelet[2136]: E0813 01:07:30.264704 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.264731 kubelet[2136]: W0813 01:07:30.264718 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.264731 kubelet[2136]: E0813 01:07:30.264728 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.264916 kubelet[2136]: E0813 01:07:30.264893 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.264916 kubelet[2136]: W0813 01:07:30.264906 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.265003 kubelet[2136]: E0813 01:07:30.264916 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.265128 kubelet[2136]: E0813 01:07:30.265112 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.265128 kubelet[2136]: W0813 01:07:30.265126 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.265187 kubelet[2136]: E0813 01:07:30.265137 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:30.265327 kubelet[2136]: E0813 01:07:30.265299 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.265327 kubelet[2136]: W0813 01:07:30.265318 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.265389 kubelet[2136]: E0813 01:07:30.265328 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.265496 kubelet[2136]: E0813 01:07:30.265482 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.265525 kubelet[2136]: W0813 01:07:30.265494 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.265525 kubelet[2136]: E0813 01:07:30.265504 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.265691 kubelet[2136]: E0813 01:07:30.265677 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.265724 kubelet[2136]: W0813 01:07:30.265690 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.265724 kubelet[2136]: E0813 01:07:30.265701 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.265888 kubelet[2136]: E0813 01:07:30.265872 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.265888 kubelet[2136]: W0813 01:07:30.265883 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.265985 kubelet[2136]: E0813 01:07:30.265893 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.266077 kubelet[2136]: E0813 01:07:30.266062 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.266077 kubelet[2136]: W0813 01:07:30.266075 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.266130 kubelet[2136]: E0813 01:07:30.266084 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:30.266255 kubelet[2136]: E0813 01:07:30.266241 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.266285 kubelet[2136]: W0813 01:07:30.266254 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.266285 kubelet[2136]: E0813 01:07:30.266265 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.266465 kubelet[2136]: E0813 01:07:30.266451 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.266491 kubelet[2136]: W0813 01:07:30.266464 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.266491 kubelet[2136]: E0813 01:07:30.266475 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.266667 kubelet[2136]: E0813 01:07:30.266651 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.266667 kubelet[2136]: W0813 01:07:30.266664 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.266739 kubelet[2136]: E0813 01:07:30.266674 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.266858 kubelet[2136]: E0813 01:07:30.266839 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.266858 kubelet[2136]: W0813 01:07:30.266852 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.266936 kubelet[2136]: E0813 01:07:30.266862 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.267098 kubelet[2136]: E0813 01:07:30.267078 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.267098 kubelet[2136]: W0813 01:07:30.267092 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.267197 kubelet[2136]: E0813 01:07:30.267104 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:30.269534 kubelet[2136]: E0813 01:07:30.269500 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.269534 kubelet[2136]: W0813 01:07:30.269523 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.269650 kubelet[2136]: E0813 01:07:30.269547 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.269981 kubelet[2136]: E0813 01:07:30.269877 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.269981 kubelet[2136]: W0813 01:07:30.269918 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.269981 kubelet[2136]: E0813 01:07:30.269969 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.270306 kubelet[2136]: E0813 01:07:30.270281 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.270306 kubelet[2136]: W0813 01:07:30.270303 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.270397 kubelet[2136]: E0813 01:07:30.270342 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.270624 kubelet[2136]: E0813 01:07:30.270571 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.270624 kubelet[2136]: W0813 01:07:30.270641 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.270913 kubelet[2136]: E0813 01:07:30.270656 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.270913 kubelet[2136]: E0813 01:07:30.270804 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.270913 kubelet[2136]: W0813 01:07:30.270813 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.270913 kubelet[2136]: E0813 01:07:30.270829 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:30.271099 kubelet[2136]: E0813 01:07:30.271025 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.271099 kubelet[2136]: W0813 01:07:30.271033 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.271099 kubelet[2136]: E0813 01:07:30.271048 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.271535 kubelet[2136]: E0813 01:07:30.271510 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.271535 kubelet[2136]: W0813 01:07:30.271528 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.271674 kubelet[2136]: E0813 01:07:30.271549 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.271851 kubelet[2136]: E0813 01:07:30.271829 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.271851 kubelet[2136]: W0813 01:07:30.271843 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.272013 kubelet[2136]: E0813 01:07:30.271874 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.272054 kubelet[2136]: E0813 01:07:30.272023 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.272054 kubelet[2136]: W0813 01:07:30.272032 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.272107 kubelet[2136]: E0813 01:07:30.272088 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.272260 kubelet[2136]: E0813 01:07:30.272242 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.272260 kubelet[2136]: W0813 01:07:30.272255 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.272367 kubelet[2136]: E0813 01:07:30.272272 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:30.272506 kubelet[2136]: E0813 01:07:30.272488 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.272506 kubelet[2136]: W0813 01:07:30.272501 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.272620 kubelet[2136]: E0813 01:07:30.272517 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.272765 kubelet[2136]: E0813 01:07:30.272744 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.272765 kubelet[2136]: W0813 01:07:30.272758 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.272855 kubelet[2136]: E0813 01:07:30.272775 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.273026 kubelet[2136]: E0813 01:07:30.273004 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.273081 kubelet[2136]: W0813 01:07:30.273025 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.273081 kubelet[2136]: E0813 01:07:30.273061 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.273421 kubelet[2136]: E0813 01:07:30.273402 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.273421 kubelet[2136]: W0813 01:07:30.273419 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.273497 kubelet[2136]: E0813 01:07:30.273441 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.273901 kubelet[2136]: E0813 01:07:30.273881 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.273901 kubelet[2136]: W0813 01:07:30.273896 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.273998 kubelet[2136]: E0813 01:07:30.273951 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:30.274148 kubelet[2136]: E0813 01:07:30.274126 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.274148 kubelet[2136]: W0813 01:07:30.274145 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.274217 kubelet[2136]: E0813 01:07:30.274162 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.274446 kubelet[2136]: E0813 01:07:30.274415 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.274446 kubelet[2136]: W0813 01:07:30.274432 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.274446 kubelet[2136]: E0813 01:07:30.274445 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:30.274952 kubelet[2136]: E0813 01:07:30.274931 2136 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:30.274952 kubelet[2136]: W0813 01:07:30.274948 2136 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:30.275039 kubelet[2136]: E0813 01:07:30.274961 2136 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:30.698463 env[1307]: time="2025-08-13T01:07:30.698394904Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:30.700382 env[1307]: time="2025-08-13T01:07:30.700327559Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:30.701969 env[1307]: time="2025-08-13T01:07:30.701840481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:30.703169 env[1307]: time="2025-08-13T01:07:30.703137438Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:30.703616 env[1307]: time="2025-08-13T01:07:30.703568746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 01:07:30.705459 env[1307]: time="2025-08-13T01:07:30.705423677Z" level=info msg="CreateContainer within sandbox \"bfc29cfd0ff4f892c90bd579dfea127e44773b051b33311b7b3a6979847e9118\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 01:07:30.720713 env[1307]: time="2025-08-13T01:07:30.720665764Z" level=info msg="CreateContainer within sandbox \"bfc29cfd0ff4f892c90bd579dfea127e44773b051b33311b7b3a6979847e9118\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"afecbf878fb25be6da7c78a8887276cd1d615ad83bfd49ae6c3654fb0f3bef8d\"" Aug 13 01:07:30.721802 env[1307]: time="2025-08-13T01:07:30.721506342Z" level=info msg="StartContainer for \"afecbf878fb25be6da7c78a8887276cd1d615ad83bfd49ae6c3654fb0f3bef8d\"" Aug 13 01:07:30.799249 env[1307]: time="2025-08-13T01:07:30.799159173Z" level=info msg="StartContainer for \"afecbf878fb25be6da7c78a8887276cd1d615ad83bfd49ae6c3654fb0f3bef8d\" returns successfully" Aug 13 01:07:30.847349 env[1307]: time="2025-08-13T01:07:30.847270852Z" level=info msg="shim disconnected" id=afecbf878fb25be6da7c78a8887276cd1d615ad83bfd49ae6c3654fb0f3bef8d Aug 13 01:07:30.847349 env[1307]: time="2025-08-13T01:07:30.847333263Z" level=warning msg="cleaning up after shim disconnected" id=afecbf878fb25be6da7c78a8887276cd1d615ad83bfd49ae6c3654fb0f3bef8d namespace=k8s.io Aug 13 01:07:30.847349 env[1307]: time="2025-08-13T01:07:30.847341811Z" level=info msg="cleaning up dead shim" Aug 13 01:07:30.867480 env[1307]: time="2025-08-13T01:07:30.867411012Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:07:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2820 runtime=io.containerd.runc.v2\n" Aug 13 01:07:31.189491 kubelet[2136]: I0813 01:07:31.189454 2136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:07:31.189967 kubelet[2136]: E0813 01:07:31.189723 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:31.190355 env[1307]: time="2025-08-13T01:07:31.190307571Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 01:07:31.205563 kubelet[2136]: I0813 01:07:31.205494 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bdbd956bf-28bb8" podStartSLOduration=3.891045282 podStartE2EDuration="6.205473744s" podCreationTimestamp="2025-08-13 01:07:25 +0000 UTC" firstStartedPulling="2025-08-13 01:07:25.783165106 +0000 UTC m=+16.428295065" lastFinishedPulling="2025-08-13 01:07:28.097593568 +0000 UTC m=+18.742723527" observedRunningTime="2025-08-13 01:07:30.196928881 +0000 UTC m=+20.842058840" watchObservedRunningTime="2025-08-13 01:07:31.205473744 +0000 UTC m=+21.850603703" Aug 13 01:07:31.456756 kubelet[2136]: E0813 01:07:31.456632 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgm6z" podUID="2333ecfa-adb6-4791-8fd0-6a082b51d429" Aug 13 01:07:31.713937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afecbf878fb25be6da7c78a8887276cd1d615ad83bfd49ae6c3654fb0f3bef8d-rootfs.mount: Deactivated successfully. Aug 13 01:07:32.511563 kubelet[2136]: I0813 01:07:32.511524 2136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:07:32.512072 kubelet[2136]: E0813 01:07:32.512009 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:32.532000 audit[2844]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:32.535205 kernel: kauditd_printk_skb: 19 callbacks suppressed Aug 13 01:07:32.535272 kernel: audit: type=1325 audit(1755047252.532:290): table=filter:97 family=2 entries=21 op=nft_register_rule pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:32.532000 audit[2844]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe73dbfff0 a2=0 a3=7ffe73dbffdc items=0 ppid=2285 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:32.543757 kernel: audit: type=1300 audit(1755047252.532:290): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe73dbfff0 a2=0 a3=7ffe73dbffdc items=0 ppid=2285 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:32.543900 kernel: audit: type=1327 audit(1755047252.532:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:32.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:32.552000 audit[2844]: NETFILTER_CFG table=nat:98 family=2 entries=19 op=nft_register_chain pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:32.552000 audit[2844]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe73dbfff0 a2=0 a3=7ffe73dbffdc items=0 ppid=2285 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:32.569663 kernel: audit: type=1325 audit(1755047252.552:291): table=nat:98 family=2 entries=19 op=nft_register_chain pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:32.569733 kernel: audit: type=1300 audit(1755047252.552:291): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe73dbfff0 a2=0 a3=7ffe73dbffdc items=0 ppid=2285 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:32.552000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:32.572887 kernel: audit: type=1327 audit(1755047252.552:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:33.192681 kubelet[2136]: E0813 01:07:33.192627 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:33.456504 kubelet[2136]: E0813 01:07:33.456059 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgm6z" podUID="2333ecfa-adb6-4791-8fd0-6a082b51d429" Aug 13 01:07:34.752574 env[1307]: time="2025-08-13T01:07:34.752510527Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:34.754454 env[1307]: time="2025-08-13T01:07:34.754408546Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:34.755961 env[1307]: time="2025-08-13T01:07:34.755890770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:34.757157 env[1307]: time="2025-08-13T01:07:34.757131711Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:34.757559 env[1307]: time="2025-08-13T01:07:34.757533728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 01:07:34.759753 env[1307]: time="2025-08-13T01:07:34.759706388Z" level=info msg="CreateContainer within sandbox \"bfc29cfd0ff4f892c90bd579dfea127e44773b051b33311b7b3a6979847e9118\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 01:07:34.771232 env[1307]: time="2025-08-13T01:07:34.771183315Z" level=info msg="CreateContainer within sandbox \"bfc29cfd0ff4f892c90bd579dfea127e44773b051b33311b7b3a6979847e9118\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"935012dc72f4fe80a2e1cf4d014b24f1e3765f82d7f27f958f88aa64ab95bcf7\"" Aug 13 01:07:34.771600 env[1307]: time="2025-08-13T01:07:34.771556862Z" level=info msg="StartContainer for \"935012dc72f4fe80a2e1cf4d014b24f1e3765f82d7f27f958f88aa64ab95bcf7\"" Aug 13 01:07:35.456350 kubelet[2136]: E0813 01:07:35.456269 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgm6z" podUID="2333ecfa-adb6-4791-8fd0-6a082b51d429" Aug 13 01:07:35.528818 env[1307]: time="2025-08-13T01:07:35.528747764Z" level=info msg="StartContainer for \"935012dc72f4fe80a2e1cf4d014b24f1e3765f82d7f27f958f88aa64ab95bcf7\" returns successfully" Aug 13 01:07:36.446870 env[1307]: time="2025-08-13T01:07:36.446771403Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:07:36.463395 kubelet[2136]: I0813 01:07:36.463348 2136 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 01:07:36.463980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-935012dc72f4fe80a2e1cf4d014b24f1e3765f82d7f27f958f88aa64ab95bcf7-rootfs.mount: Deactivated successfully. Aug 13 01:07:36.468443 env[1307]: time="2025-08-13T01:07:36.468309860Z" level=info msg="shim disconnected" id=935012dc72f4fe80a2e1cf4d014b24f1e3765f82d7f27f958f88aa64ab95bcf7 Aug 13 01:07:36.468443 env[1307]: time="2025-08-13T01:07:36.468360826Z" level=warning msg="cleaning up after shim disconnected" id=935012dc72f4fe80a2e1cf4d014b24f1e3765f82d7f27f958f88aa64ab95bcf7 namespace=k8s.io Aug 13 01:07:36.468443 env[1307]: time="2025-08-13T01:07:36.468370235Z" level=info msg="cleaning up dead shim" Aug 13 01:07:36.475690 env[1307]: time="2025-08-13T01:07:36.475575187Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:07:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2894 runtime=io.containerd.runc.v2\n" Aug 13 01:07:36.515692 kubelet[2136]: I0813 01:07:36.515635 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c33eda-a5e8-41ab-8360-f83220e743eb-config\") pod \"goldmane-58fd7646b9-b5ght\" (UID: \"16c33eda-a5e8-41ab-8360-f83220e743eb\") " pod="calico-system/goldmane-58fd7646b9-b5ght" Aug 13 01:07:36.515692 kubelet[2136]: I0813 01:07:36.515685 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b3501aa-75e5-451f-8e19-819e941f33bc-calico-apiserver-certs\") pod \"calico-apiserver-66678997c4-jk2r2\" (UID: \"7b3501aa-75e5-451f-8e19-819e941f33bc\") " pod="calico-apiserver/calico-apiserver-66678997c4-jk2r2" Aug 13 01:07:36.515692 kubelet[2136]: I0813 01:07:36.515706 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b672ef7-dbd1-4d90-9a79-75019f71379f-config-volume\") pod \"coredns-7c65d6cfc9-tbrxc\" (UID: \"8b672ef7-dbd1-4d90-9a79-75019f71379f\") " pod="kube-system/coredns-7c65d6cfc9-tbrxc" Aug 13 01:07:36.515930 kubelet[2136]: I0813 01:07:36.515722 2136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b5f5aa8-77fb-4a21-9305-7c26692f1342-config-volume\") pod \"coredns-7c65d6cfc9-xl6pl\" (UID: \"7b5f5aa8-77fb-4a21-9305-7c26692f1342\") " pod="kube-system/coredns-7c65d6cfc9-xl6pl" Aug 13 01:07:36.515930 kubelet[2136]: I0813 01:07:36.515738 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpbmz\" (UniqueName: \"kubernetes.io/projected/16c33eda-a5e8-41ab-8360-f83220e743eb-kube-api-access-rpbmz\") pod \"goldmane-58fd7646b9-b5ght\" (UID: \"16c33eda-a5e8-41ab-8360-f83220e743eb\") " pod="calico-system/goldmane-58fd7646b9-b5ght" Aug 13 01:07:36.515930 kubelet[2136]: I0813 01:07:36.515752 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljqfs\" (UniqueName: \"kubernetes.io/projected/7b3501aa-75e5-451f-8e19-819e941f33bc-kube-api-access-ljqfs\") pod \"calico-apiserver-66678997c4-jk2r2\" (UID: \"7b3501aa-75e5-451f-8e19-819e941f33bc\") " pod="calico-apiserver/calico-apiserver-66678997c4-jk2r2" Aug 13 01:07:36.515930 kubelet[2136]: I0813 01:07:36.515767 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/16c33eda-a5e8-41ab-8360-f83220e743eb-goldmane-key-pair\") pod \"goldmane-58fd7646b9-b5ght\" (UID: \"16c33eda-a5e8-41ab-8360-f83220e743eb\") " pod="calico-system/goldmane-58fd7646b9-b5ght" Aug 13 01:07:36.515930 kubelet[2136]: I0813 01:07:36.515781 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-whisker-backend-key-pair\") pod \"whisker-c9c7549f4-nlhf8\" (UID: \"f1f28159-1b5d-4a2e-b8c8-27af428f9df3\") " pod="calico-system/whisker-c9c7549f4-nlhf8" Aug 13 01:07:36.516055 kubelet[2136]: I0813 01:07:36.515794 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl9pn\" (UniqueName: \"kubernetes.io/projected/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-kube-api-access-jl9pn\") pod \"whisker-c9c7549f4-nlhf8\" (UID: \"f1f28159-1b5d-4a2e-b8c8-27af428f9df3\") " pod="calico-system/whisker-c9c7549f4-nlhf8" Aug 13 01:07:36.516055 kubelet[2136]: I0813 01:07:36.515812 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43d02c55-4060-4d42-8d96-57faedfa9ddb-tigera-ca-bundle\") pod \"calico-kube-controllers-975f76598-2gqft\" (UID: \"43d02c55-4060-4d42-8d96-57faedfa9ddb\") " pod="calico-system/calico-kube-controllers-975f76598-2gqft" Aug 13 01:07:36.516055 kubelet[2136]: I0813 01:07:36.515826 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4z5s\" (UniqueName: \"kubernetes.io/projected/43d02c55-4060-4d42-8d96-57faedfa9ddb-kube-api-access-w4z5s\") pod \"calico-kube-controllers-975f76598-2gqft\" (UID: \"43d02c55-4060-4d42-8d96-57faedfa9ddb\") " pod="calico-system/calico-kube-controllers-975f76598-2gqft" Aug 13 01:07:36.516055 kubelet[2136]: I0813 01:07:36.515842 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16c33eda-a5e8-41ab-8360-f83220e743eb-goldmane-ca-bundle\") 
pod \"goldmane-58fd7646b9-b5ght\" (UID: \"16c33eda-a5e8-41ab-8360-f83220e743eb\") " pod="calico-system/goldmane-58fd7646b9-b5ght" Aug 13 01:07:36.516055 kubelet[2136]: I0813 01:07:36.515863 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6svj2\" (UniqueName: \"kubernetes.io/projected/8b672ef7-dbd1-4d90-9a79-75019f71379f-kube-api-access-6svj2\") pod \"coredns-7c65d6cfc9-tbrxc\" (UID: \"8b672ef7-dbd1-4d90-9a79-75019f71379f\") " pod="kube-system/coredns-7c65d6cfc9-tbrxc" Aug 13 01:07:36.516189 kubelet[2136]: I0813 01:07:36.515877 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pndbx\" (UniqueName: \"kubernetes.io/projected/1d6cb667-b4a4-4c92-a22d-5b802942ec42-kube-api-access-pndbx\") pod \"calico-apiserver-66678997c4-cjt94\" (UID: \"1d6cb667-b4a4-4c92-a22d-5b802942ec42\") " pod="calico-apiserver/calico-apiserver-66678997c4-cjt94" Aug 13 01:07:36.516189 kubelet[2136]: I0813 01:07:36.515897 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-whisker-ca-bundle\") pod \"whisker-c9c7549f4-nlhf8\" (UID: \"f1f28159-1b5d-4a2e-b8c8-27af428f9df3\") " pod="calico-system/whisker-c9c7549f4-nlhf8" Aug 13 01:07:36.516189 kubelet[2136]: I0813 01:07:36.515911 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td82d\" (UniqueName: \"kubernetes.io/projected/7b5f5aa8-77fb-4a21-9305-7c26692f1342-kube-api-access-td82d\") pod \"coredns-7c65d6cfc9-xl6pl\" (UID: \"7b5f5aa8-77fb-4a21-9305-7c26692f1342\") " pod="kube-system/coredns-7c65d6cfc9-xl6pl" Aug 13 01:07:36.516189 kubelet[2136]: I0813 01:07:36.515928 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1d6cb667-b4a4-4c92-a22d-5b802942ec42-calico-apiserver-certs\") pod \"calico-apiserver-66678997c4-cjt94\" (UID: \"1d6cb667-b4a4-4c92-a22d-5b802942ec42\") " pod="calico-apiserver/calico-apiserver-66678997c4-cjt94" Aug 13 01:07:36.541619 env[1307]: time="2025-08-13T01:07:36.541560294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:07:36.798842 env[1307]: time="2025-08-13T01:07:36.798700430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66678997c4-jk2r2,Uid:7b3501aa-75e5-451f-8e19-819e941f33bc,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:07:36.800033 kubelet[2136]: E0813 01:07:36.799986 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:36.801026 env[1307]: time="2025-08-13T01:07:36.800983457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66678997c4-cjt94,Uid:1d6cb667-b4a4-4c92-a22d-5b802942ec42,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:07:36.801167 env[1307]: time="2025-08-13T01:07:36.801128869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xl6pl,Uid:7b5f5aa8-77fb-4a21-9305-7c26692f1342,Namespace:kube-system,Attempt:0,}" Aug 13 01:07:36.804661 kubelet[2136]: E0813 01:07:36.804393 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Aug 13 01:07:36.804814 env[1307]: time="2025-08-13T01:07:36.804758767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tbrxc,Uid:8b672ef7-dbd1-4d90-9a79-75019f71379f,Namespace:kube-system,Attempt:0,}" Aug 13 01:07:36.805077 env[1307]: time="2025-08-13T01:07:36.805044338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-b5ght,Uid:16c33eda-a5e8-41ab-8360-f83220e743eb,Namespace:calico-system,Attempt:0,}" Aug 13 01:07:36.812646 env[1307]: time="2025-08-13T01:07:36.812271135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975f76598-2gqft,Uid:43d02c55-4060-4d42-8d96-57faedfa9ddb,Namespace:calico-system,Attempt:0,}" Aug 13 01:07:36.816117 env[1307]: time="2025-08-13T01:07:36.816081657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c9c7549f4-nlhf8,Uid:f1f28159-1b5d-4a2e-b8c8-27af428f9df3,Namespace:calico-system,Attempt:0,}" Aug 13 01:07:36.957508 env[1307]: time="2025-08-13T01:07:36.957431116Z" level=error msg="Failed to destroy network for sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.958209 env[1307]: time="2025-08-13T01:07:36.958180718Z" level=error msg="encountered an error cleaning up failed sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.958358 env[1307]: time="2025-08-13T01:07:36.958322662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-b5ght,Uid:16c33eda-a5e8-41ab-8360-f83220e743eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.958990 kubelet[2136]: E0813 01:07:36.958934 2136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.960012 kubelet[2136]: E0813 01:07:36.959988 2136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-b5ght" Aug 13 01:07:36.960089 kubelet[2136]: E0813 01:07:36.960016 2136 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-b5ght" Aug 13 01:07:36.960089 kubelet[2136]: E0813 01:07:36.960066 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-b5ght_calico-system(16c33eda-a5e8-41ab-8360-f83220e743eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-b5ght_calico-system(16c33eda-a5e8-41ab-8360-f83220e743eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-b5ght" podUID="16c33eda-a5e8-41ab-8360-f83220e743eb" Aug 13 01:07:36.983340 env[1307]: time="2025-08-13T01:07:36.983299210Z" level=error msg="Failed to destroy network for sandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.984017 env[1307]: time="2025-08-13T01:07:36.983986764Z" level=error msg="encountered an error cleaning up failed sandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.984143 env[1307]: time="2025-08-13T01:07:36.984111472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66678997c4-jk2r2,Uid:7b3501aa-75e5-451f-8e19-819e941f33bc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.984641 kubelet[2136]: E0813 01:07:36.984569 2136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.984724 kubelet[2136]: E0813 01:07:36.984660 2136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66678997c4-jk2r2" Aug 13 01:07:36.984724 kubelet[2136]: E0813 01:07:36.984680 2136 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66678997c4-jk2r2" Aug 13 01:07:36.984785 kubelet[2136]: E0813 01:07:36.984733 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66678997c4-jk2r2_calico-apiserver(7b3501aa-75e5-451f-8e19-819e941f33bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66678997c4-jk2r2_calico-apiserver(7b3501aa-75e5-451f-8e19-819e941f33bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66678997c4-jk2r2" podUID="7b3501aa-75e5-451f-8e19-819e941f33bc" Aug 13 01:07:36.988723 env[1307]: time="2025-08-13T01:07:36.988655333Z" level=error msg="Failed to destroy network for sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.989044 env[1307]: time="2025-08-13T01:07:36.989013855Z" level=error msg="encountered an error cleaning up failed sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.989121 env[1307]: time="2025-08-13T01:07:36.989060952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66678997c4-cjt94,Uid:1d6cb667-b4a4-4c92-a22d-5b802942ec42,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.989312 kubelet[2136]: E0813 01:07:36.989277 2136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:36.989384 kubelet[2136]: E0813 01:07:36.989344 2136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66678997c4-cjt94" Aug 13 01:07:36.989384 kubelet[2136]: E0813 01:07:36.989367 2136 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66678997c4-cjt94" Aug 13 01:07:36.989445 kubelet[2136]: E0813 01:07:36.989414 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66678997c4-cjt94_calico-apiserver(1d6cb667-b4a4-4c92-a22d-5b802942ec42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66678997c4-cjt94_calico-apiserver(1d6cb667-b4a4-4c92-a22d-5b802942ec42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66678997c4-cjt94" podUID="1d6cb667-b4a4-4c92-a22d-5b802942ec42" Aug 13 01:07:37.006818 env[1307]: time="2025-08-13T01:07:37.006740967Z" level=error msg="Failed to destroy network for sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.007188 env[1307]: time="2025-08-13T01:07:37.007143989Z" level=error msg="encountered an error cleaning up failed sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.007245 env[1307]: time="2025-08-13T01:07:37.007205397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tbrxc,Uid:8b672ef7-dbd1-4d90-9a79-75019f71379f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.007419 kubelet[2136]: E0813 01:07:37.007384 2136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.007496 kubelet[2136]: E0813 01:07:37.007436 2136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tbrxc" Aug 13 
01:07:37.007496 kubelet[2136]: E0813 01:07:37.007455 2136 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tbrxc" Aug 13 01:07:37.007553 kubelet[2136]: E0813 01:07:37.007489 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tbrxc_kube-system(8b672ef7-dbd1-4d90-9a79-75019f71379f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tbrxc_kube-system(8b672ef7-dbd1-4d90-9a79-75019f71379f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tbrxc" podUID="8b672ef7-dbd1-4d90-9a79-75019f71379f" Aug 13 01:07:37.011141 env[1307]: time="2025-08-13T01:07:37.011081738Z" level=error msg="Failed to destroy network for sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.011757 env[1307]: time="2025-08-13T01:07:37.011701800Z" level=error msg="encountered an error cleaning up failed sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.011897 env[1307]: time="2025-08-13T01:07:37.011865709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xl6pl,Uid:7b5f5aa8-77fb-4a21-9305-7c26692f1342,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.012178 kubelet[2136]: E0813 01:07:37.012115 2136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.012178 kubelet[2136]: E0813 01:07:37.012177 2136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-xl6pl" Aug 13 01:07:37.012390 kubelet[2136]: E0813 01:07:37.012192 2136 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xl6pl" Aug 13 01:07:37.012390 kubelet[2136]: E0813 01:07:37.012227 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-xl6pl_kube-system(7b5f5aa8-77fb-4a21-9305-7c26692f1342)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-xl6pl_kube-system(7b5f5aa8-77fb-4a21-9305-7c26692f1342)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xl6pl" podUID="7b5f5aa8-77fb-4a21-9305-7c26692f1342" Aug 13 01:07:37.014698 env[1307]: time="2025-08-13T01:07:37.014623920Z" level=error msg="Failed to destroy network for sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.015095 env[1307]: time="2025-08-13T01:07:37.015052285Z" level=error msg="encountered an error cleaning up failed sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.015230 env[1307]: time="2025-08-13T01:07:37.015197234Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975f76598-2gqft,Uid:43d02c55-4060-4d42-8d96-57faedfa9ddb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.016028 kubelet[2136]: E0813 01:07:37.015529 2136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.016028 kubelet[2136]: E0813 01:07:37.015624 2136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-975f76598-2gqft" Aug 13 01:07:37.016028 kubelet[2136]: E0813 01:07:37.015653 2136 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-975f76598-2gqft" Aug 13 01:07:37.016192 kubelet[2136]: E0813 01:07:37.015703 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-975f76598-2gqft_calico-system(43d02c55-4060-4d42-8d96-57faedfa9ddb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-975f76598-2gqft_calico-system(43d02c55-4060-4d42-8d96-57faedfa9ddb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-975f76598-2gqft" podUID="43d02c55-4060-4d42-8d96-57faedfa9ddb" Aug 13 01:07:37.019166 env[1307]: time="2025-08-13T01:07:37.019080751Z" level=error msg="Failed to destroy network for sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.019538 env[1307]: time="2025-08-13T01:07:37.019502442Z" level=error msg="encountered an error cleaning up failed sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.019653 env[1307]: time="2025-08-13T01:07:37.019553838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c9c7549f4-nlhf8,Uid:f1f28159-1b5d-4a2e-b8c8-27af428f9df3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.019795 kubelet[2136]: E0813 01:07:37.019762 2136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.019872 kubelet[2136]: E0813 01:07:37.019801 2136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c9c7549f4-nlhf8" Aug 13 01:07:37.019872 kubelet[2136]: E0813 01:07:37.019815 2136 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c9c7549f4-nlhf8" Aug 13 01:07:37.019872 kubelet[2136]: E0813 01:07:37.019845 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c9c7549f4-nlhf8_calico-system(f1f28159-1b5d-4a2e-b8c8-27af428f9df3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c9c7549f4-nlhf8_calico-system(f1f28159-1b5d-4a2e-b8c8-27af428f9df3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c9c7549f4-nlhf8" podUID="f1f28159-1b5d-4a2e-b8c8-27af428f9df3" Aug 13 01:07:37.468614 env[1307]: time="2025-08-13T01:07:37.465262641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgm6z,Uid:2333ecfa-adb6-4791-8fd0-6a082b51d429,Namespace:calico-system,Attempt:0,}" Aug 13 01:07:37.516435 env[1307]: time="2025-08-13T01:07:37.516356200Z" level=error msg="Failed to destroy network for sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.516746 env[1307]: time="2025-08-13T01:07:37.516712626Z" level=error msg="encountered an error cleaning up failed sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.516787 env[1307]: time="2025-08-13T01:07:37.516765526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgm6z,Uid:2333ecfa-adb6-4791-8fd0-6a082b51d429,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.517063 kubelet[2136]: E0813 01:07:37.516999 2136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.517063 
kubelet[2136]: E0813 01:07:37.517070 2136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bgm6z" Aug 13 01:07:37.517503 kubelet[2136]: E0813 01:07:37.517092 2136 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bgm6z" Aug 13 01:07:37.517503 kubelet[2136]: E0813 01:07:37.517150 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bgm6z_calico-system(2333ecfa-adb6-4791-8fd0-6a082b51d429)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bgm6z_calico-system(2333ecfa-adb6-4791-8fd0-6a082b51d429)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bgm6z" podUID="2333ecfa-adb6-4791-8fd0-6a082b51d429" Aug 13 01:07:37.518508 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d-shm.mount: Deactivated successfully. 
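[Editor's note] The sandbox failures above and below all trace back to the same two preconditions that are still unmet at this point in the boot: calico/node has not yet written /var/lib/calico/nodename, and no CNI network config has yet been dropped into /etc/cni/net.d (the earlier "cni plugin not initialized" reload error). The following is a minimal, illustrative Go sketch of those two checks, with the paths taken verbatim from the log entries; it is not part of Calico or the kubelet, just a stand-alone diagnostic.

```go
// Diagnostic sketch only: re-runs the two checks that the log entries in this
// section are failing, using the exact paths reported by the Calico plugin and
// containerd. Not an excerpt of either project's source.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The Calico CNI plugin refuses add/delete operations until calico/node has
	// written the node name here, hence the repeated
	// "stat /var/lib/calico/nodename: no such file or directory" errors.
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		fmt.Println("calico/node has not initialized yet:", err)
	} else {
		fmt.Println("/var/lib/calico/nodename is present")
	}

	// containerd keeps reporting "no network config found in /etc/cni/net.d"
	// until a *.conf/*.conflist file appears in this directory.
	var matches []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pattern))
		matches = append(matches, m...)
	}
	if len(matches) == 0 {
		fmt.Println("no CNI network config found in /etc/cni/net.d")
	} else {
		fmt.Println("CNI configs:", matches)
	}
}
```

In a healthy bring-up both conditions are normally satisfied by the calico-node pod, which is consistent with the flexvol-driver and install-cni containers started and the calico/node:v3.30.2 image pull begun earlier in this log; until then the RunPodSandbox/StopPodSandbox retries seen here are expected to keep failing.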
Aug 13 01:07:37.538787 kubelet[2136]: I0813 01:07:37.538749 2136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:07:37.539562 env[1307]: time="2025-08-13T01:07:37.539506657Z" level=info msg="StopPodSandbox for \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\"" Aug 13 01:07:37.540070 kubelet[2136]: I0813 01:07:37.539993 2136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:07:37.540671 env[1307]: time="2025-08-13T01:07:37.540638165Z" level=info msg="StopPodSandbox for \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\"" Aug 13 01:07:37.543414 kubelet[2136]: I0813 01:07:37.543383 2136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:07:37.543965 env[1307]: time="2025-08-13T01:07:37.543941242Z" level=info msg="StopPodSandbox for \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\"" Aug 13 01:07:37.545617 kubelet[2136]: I0813 01:07:37.545162 2136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:07:37.545898 env[1307]: time="2025-08-13T01:07:37.545861601Z" level=info msg="StopPodSandbox for \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\"" Aug 13 01:07:37.547264 kubelet[2136]: I0813 01:07:37.546699 2136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:07:37.547481 env[1307]: time="2025-08-13T01:07:37.547454453Z" level=info msg="StopPodSandbox for \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\"" Aug 13 01:07:37.548698 kubelet[2136]: I0813 01:07:37.548664 2136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:07:37.549306 env[1307]: time="2025-08-13T01:07:37.549281208Z" level=info msg="StopPodSandbox for \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\"" Aug 13 01:07:37.550746 kubelet[2136]: I0813 01:07:37.550432 2136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:07:37.551032 env[1307]: time="2025-08-13T01:07:37.550999670Z" level=info msg="StopPodSandbox for \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\"" Aug 13 01:07:37.552288 kubelet[2136]: I0813 01:07:37.551837 2136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:07:37.552753 env[1307]: time="2025-08-13T01:07:37.552733283Z" level=info msg="StopPodSandbox for \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\"" Aug 13 01:07:37.582042 env[1307]: time="2025-08-13T01:07:37.581972674Z" level=error msg="StopPodSandbox for \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\" failed" error="failed to destroy network for sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.582721 kubelet[2136]: E0813 01:07:37.582499 2136 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:07:37.582721 kubelet[2136]: E0813 01:07:37.582567 2136 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911"} Aug 13 01:07:37.582721 kubelet[2136]: E0813 01:07:37.582645 2136 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b5f5aa8-77fb-4a21-9305-7c26692f1342\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:07:37.582721 kubelet[2136]: E0813 01:07:37.582679 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b5f5aa8-77fb-4a21-9305-7c26692f1342\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xl6pl" podUID="7b5f5aa8-77fb-4a21-9305-7c26692f1342" Aug 13 01:07:37.583546 env[1307]: time="2025-08-13T01:07:37.583475881Z" level=error msg="StopPodSandbox for \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\" failed" error="failed to destroy network for sandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.584035 kubelet[2136]: E0813 01:07:37.583937 2136 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:07:37.584035 kubelet[2136]: E0813 01:07:37.583963 2136 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3"} Aug 13 01:07:37.584035 kubelet[2136]: E0813 01:07:37.583985 2136 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b3501aa-75e5-451f-8e19-819e941f33bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:07:37.584035 kubelet[2136]: E0813 01:07:37.584000 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b3501aa-75e5-451f-8e19-819e941f33bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66678997c4-jk2r2" podUID="7b3501aa-75e5-451f-8e19-819e941f33bc" Aug 13 01:07:37.588553 env[1307]: time="2025-08-13T01:07:37.588498121Z" level=error msg="StopPodSandbox for \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\" failed" error="failed to destroy network for sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.588874 kubelet[2136]: E0813 01:07:37.588754 2136 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:07:37.588874 kubelet[2136]: E0813 01:07:37.588799 2136 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b"} Aug 13 01:07:37.588874 kubelet[2136]: E0813 01:07:37.588830 2136 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16c33eda-a5e8-41ab-8360-f83220e743eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:07:37.588874 kubelet[2136]: E0813 01:07:37.588847 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16c33eda-a5e8-41ab-8360-f83220e743eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-b5ght" podUID="16c33eda-a5e8-41ab-8360-f83220e743eb" Aug 13 01:07:37.608794 env[1307]: time="2025-08-13T01:07:37.608724905Z" level=error msg="StopPodSandbox for \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\" failed" 
error="failed to destroy network for sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.608794 env[1307]: time="2025-08-13T01:07:37.608736139Z" level=error msg="StopPodSandbox for \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\" failed" error="failed to destroy network for sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.608993 env[1307]: time="2025-08-13T01:07:37.608855967Z" level=error msg="StopPodSandbox for \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\" failed" error="failed to destroy network for sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.609351 kubelet[2136]: E0813 01:07:37.609037 2136 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:07:37.609351 kubelet[2136]: E0813 01:07:37.609068 2136 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:07:37.609351 kubelet[2136]: E0813 01:07:37.609100 2136 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b"} Aug 13 01:07:37.609351 kubelet[2136]: E0813 01:07:37.609116 2136 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:07:37.609351 kubelet[2136]: E0813 01:07:37.609178 2136 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb"} Aug 13 01:07:37.609530 kubelet[2136]: E0813 01:07:37.609218 2136 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d6cb667-b4a4-4c92-a22d-5b802942ec42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:07:37.609530 kubelet[2136]: E0813 01:07:37.609140 2136 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b672ef7-dbd1-4d90-9a79-75019f71379f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:07:37.609530 kubelet[2136]: E0813 01:07:37.609265 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d6cb667-b4a4-4c92-a22d-5b802942ec42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66678997c4-cjt94" podUID="1d6cb667-b4a4-4c92-a22d-5b802942ec42" Aug 13 01:07:37.609530 kubelet[2136]: E0813 01:07:37.609134 2136 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d"} Aug 13 01:07:37.609727 kubelet[2136]: E0813 01:07:37.609282 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b672ef7-dbd1-4d90-9a79-75019f71379f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tbrxc" podUID="8b672ef7-dbd1-4d90-9a79-75019f71379f" Aug 13 01:07:37.609727 kubelet[2136]: E0813 01:07:37.609304 2136 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2333ecfa-adb6-4791-8fd0-6a082b51d429\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:07:37.609727 kubelet[2136]: E0813 01:07:37.609321 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2333ecfa-adb6-4791-8fd0-6a082b51d429\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bgm6z" 
podUID="2333ecfa-adb6-4791-8fd0-6a082b51d429" Aug 13 01:07:37.618469 env[1307]: time="2025-08-13T01:07:37.618391945Z" level=error msg="StopPodSandbox for \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\" failed" error="failed to destroy network for sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.619052 kubelet[2136]: E0813 01:07:37.619011 2136 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:07:37.619139 kubelet[2136]: E0813 01:07:37.619066 2136 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880"} Aug 13 01:07:37.619139 kubelet[2136]: E0813 01:07:37.619103 2136 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43d02c55-4060-4d42-8d96-57faedfa9ddb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:07:37.619139 kubelet[2136]: E0813 01:07:37.619126 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43d02c55-4060-4d42-8d96-57faedfa9ddb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-975f76598-2gqft" podUID="43d02c55-4060-4d42-8d96-57faedfa9ddb" Aug 13 01:07:37.627732 env[1307]: time="2025-08-13T01:07:37.627670322Z" level=error msg="StopPodSandbox for \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\" failed" error="failed to destroy network for sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:37.627951 kubelet[2136]: E0813 01:07:37.627905 2136 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:07:37.627951 kubelet[2136]: E0813 
01:07:37.627942 2136 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b"} Aug 13 01:07:37.628147 kubelet[2136]: E0813 01:07:37.627971 2136 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f1f28159-1b5d-4a2e-b8c8-27af428f9df3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:07:37.628147 kubelet[2136]: E0813 01:07:37.627990 2136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f1f28159-1b5d-4a2e-b8c8-27af428f9df3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c9c7549f4-nlhf8" podUID="f1f28159-1b5d-4a2e-b8c8-27af428f9df3" Aug 13 01:07:42.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.139:22-10.0.0.1:51866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:42.327432 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:51866.service. Aug 13 01:07:42.334681 kernel: audit: type=1130 audit(1755047262.325:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.139:22-10.0.0.1:51866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:07:42.381197 kernel: audit: type=1101 audit(1755047262.365:293): pid=3339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.381302 kernel: audit: type=1103 audit(1755047262.369:294): pid=3339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.381365 kernel: audit: type=1006 audit(1755047262.369:295): pid=3339 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Aug 13 01:07:42.381385 kernel: audit: type=1300 audit(1755047262.369:295): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe82abdcf0 a2=3 a3=0 items=0 ppid=1 pid=3339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:42.365000 audit[3339]: USER_ACCT pid=3339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.369000 audit[3339]: CRED_ACQ pid=3339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.369000 audit[3339]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe82abdcf0 a2=3 a3=0 items=0 ppid=1 pid=3339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:42.372252 sshd[3339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:07:42.381898 sshd[3339]: Accepted publickey for core from 10.0.0.1 port 51866 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:07:42.369000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:07:42.383607 kernel: audit: type=1327 audit(1755047262.369:295): proctitle=737368643A20636F7265205B707269765D Aug 13 01:07:42.384399 systemd-logind[1290]: New session 8 of user core. Aug 13 01:07:42.385353 systemd[1]: Started session-8.scope. 
Aug 13 01:07:42.397509 kernel: audit: type=1105 audit(1755047262.388:296): pid=3339 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.397576 kernel: audit: type=1103 audit(1755047262.389:297): pid=3342 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.388000 audit[3339]: USER_START pid=3339 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.389000 audit[3342]: CRED_ACQ pid=3342 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.499709 sshd[3339]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:42.499000 audit[3339]: USER_END pid=3339 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.503000 audit[3339]: CRED_DISP pid=3339 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.505609 kernel: audit: type=1106 audit(1755047262.499:298): pid=3339 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.505653 kernel: audit: type=1104 audit(1755047262.503:299): pid=3339 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:42.506523 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:51866.service: Deactivated successfully. Aug 13 01:07:42.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.139:22-10.0.0.1:51866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:42.507842 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:07:42.508202 systemd-logind[1290]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:07:42.508921 systemd-logind[1290]: Removed session 8. Aug 13 01:07:42.801484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262811468.mount: Deactivated successfully. 
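The repeated StopPodSandbox failures above all trace back to the same check quoted verbatim in the error text: on delete, the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container only writes once it is running with /var/lib/calico mounted — which does not happen until the calico-node container is started in the entries that follow. A minimal sketch of that check, based solely on what the log messages state (illustrative only, not Calico's actual source):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the error messages in the log above.
	const nodenameFile = "/var/lib/calico/nodename"
	if _, err := os.Stat(nodenameFile); err != nil {
		if os.IsNotExist(err) {
			// Same wording the plugin reports while calico/node is not yet up.
			fmt.Printf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
			os.Exit(1)
		}
		fmt.Printf("stat %s: %v\n", nodenameFile, err)
		os.Exit(1)
	}
	fmt.Println("nodename file present; calico/node has initialized this node")
}

Consistent with this, once the calico-node container starts below (StartContainer at 01:07:45), the next StopPodSandbox for e93dd9c6… tears down its network and returns successfully instead of repeating the stat error.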
Aug 13 01:07:45.171899 env[1307]: time="2025-08-13T01:07:45.171839050Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:45.175340 env[1307]: time="2025-08-13T01:07:45.175265824Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:45.176921 env[1307]: time="2025-08-13T01:07:45.176892263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:45.178276 env[1307]: time="2025-08-13T01:07:45.178237218Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:45.178615 env[1307]: time="2025-08-13T01:07:45.178560068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:07:45.186643 env[1307]: time="2025-08-13T01:07:45.186563743Z" level=info msg="CreateContainer within sandbox \"bfc29cfd0ff4f892c90bd579dfea127e44773b051b33311b7b3a6979847e9118\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:07:45.201666 env[1307]: time="2025-08-13T01:07:45.201614553Z" level=info msg="CreateContainer within sandbox \"bfc29cfd0ff4f892c90bd579dfea127e44773b051b33311b7b3a6979847e9118\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a014bcee269e48ae936e845adb4a3d68458c62bd2018dbcd58efa8877174c716\"" Aug 13 01:07:45.202050 env[1307]: time="2025-08-13T01:07:45.202022996Z" level=info msg="StartContainer for \"a014bcee269e48ae936e845adb4a3d68458c62bd2018dbcd58efa8877174c716\"" Aug 13 01:07:45.313314 env[1307]: time="2025-08-13T01:07:45.313246439Z" level=info msg="StartContainer for \"a014bcee269e48ae936e845adb4a3d68458c62bd2018dbcd58efa8877174c716\" returns successfully" Aug 13 01:07:45.346510 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 01:07:45.346658 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 01:07:45.426255 env[1307]: time="2025-08-13T01:07:45.426099738Z" level=info msg="StopPodSandbox for \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\"" Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.538 [INFO][3423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.538 [INFO][3423] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" iface="eth0" netns="/var/run/netns/cni-d0335de6-2fa5-e6bf-942a-f69aa83bf502" Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.539 [INFO][3423] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" iface="eth0" netns="/var/run/netns/cni-d0335de6-2fa5-e6bf-942a-f69aa83bf502" Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.539 [INFO][3423] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" iface="eth0" netns="/var/run/netns/cni-d0335de6-2fa5-e6bf-942a-f69aa83bf502" Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.539 [INFO][3423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.539 [INFO][3423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.590 [INFO][3432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" HandleID="k8s-pod-network.e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Workload="localhost-k8s-whisker--c9c7549f4--nlhf8-eth0" Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.591 [INFO][3432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.591 [INFO][3432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.598 [WARNING][3432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" HandleID="k8s-pod-network.e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Workload="localhost-k8s-whisker--c9c7549f4--nlhf8-eth0" Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.598 [INFO][3432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" HandleID="k8s-pod-network.e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Workload="localhost-k8s-whisker--c9c7549f4--nlhf8-eth0" Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.599 [INFO][3432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:45.603372 env[1307]: 2025-08-13 01:07:45.601 [INFO][3423] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:07:45.603856 env[1307]: time="2025-08-13T01:07:45.603502636Z" level=info msg="TearDown network for sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\" successfully" Aug 13 01:07:45.603856 env[1307]: time="2025-08-13T01:07:45.603535333Z" level=info msg="StopPodSandbox for \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\" returns successfully" Aug 13 01:07:45.768138 kubelet[2136]: I0813 01:07:45.767623 2136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-whisker-backend-key-pair\") pod \"f1f28159-1b5d-4a2e-b8c8-27af428f9df3\" (UID: \"f1f28159-1b5d-4a2e-b8c8-27af428f9df3\") " Aug 13 01:07:45.768138 kubelet[2136]: I0813 01:07:45.767669 2136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl9pn\" (UniqueName: \"kubernetes.io/projected/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-kube-api-access-jl9pn\") pod \"f1f28159-1b5d-4a2e-b8c8-27af428f9df3\" (UID: \"f1f28159-1b5d-4a2e-b8c8-27af428f9df3\") " Aug 13 01:07:45.768138 kubelet[2136]: I0813 01:07:45.767692 2136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-whisker-ca-bundle\") pod \"f1f28159-1b5d-4a2e-b8c8-27af428f9df3\" (UID: \"f1f28159-1b5d-4a2e-b8c8-27af428f9df3\") " Aug 13 01:07:45.768138 kubelet[2136]: I0813 01:07:45.768050 2136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f1f28159-1b5d-4a2e-b8c8-27af428f9df3" (UID: "f1f28159-1b5d-4a2e-b8c8-27af428f9df3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:07:45.771001 kubelet[2136]: I0813 01:07:45.770957 2136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f1f28159-1b5d-4a2e-b8c8-27af428f9df3" (UID: "f1f28159-1b5d-4a2e-b8c8-27af428f9df3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:07:45.771150 kubelet[2136]: I0813 01:07:45.771084 2136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-kube-api-access-jl9pn" (OuterVolumeSpecName: "kube-api-access-jl9pn") pod "f1f28159-1b5d-4a2e-b8c8-27af428f9df3" (UID: "f1f28159-1b5d-4a2e-b8c8-27af428f9df3"). InnerVolumeSpecName "kube-api-access-jl9pn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:07:45.868679 kubelet[2136]: I0813 01:07:45.868622 2136 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 01:07:45.868679 kubelet[2136]: I0813 01:07:45.868665 2136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl9pn\" (UniqueName: \"kubernetes.io/projected/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-kube-api-access-jl9pn\") on node \"localhost\" DevicePath \"\"" Aug 13 01:07:45.868679 kubelet[2136]: I0813 01:07:45.868677 2136 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1f28159-1b5d-4a2e-b8c8-27af428f9df3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 01:07:46.184715 systemd[1]: run-netns-cni\x2dd0335de6\x2d2fa5\x2de6bf\x2d942a\x2df69aa83bf502.mount: Deactivated successfully. Aug 13 01:07:46.184876 systemd[1]: var-lib-kubelet-pods-f1f28159\x2d1b5d\x2d4a2e\x2db8c8\x2d27af428f9df3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djl9pn.mount: Deactivated successfully. Aug 13 01:07:46.184980 systemd[1]: var-lib-kubelet-pods-f1f28159\x2d1b5d\x2d4a2e\x2db8c8\x2d27af428f9df3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:07:46.566950 kubelet[2136]: I0813 01:07:46.566835 2136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:07:46.579348 kubelet[2136]: I0813 01:07:46.579112 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-blm7h" podStartSLOduration=2.562748516 podStartE2EDuration="21.579097684s" podCreationTimestamp="2025-08-13 01:07:25 +0000 UTC" firstStartedPulling="2025-08-13 01:07:26.163012145 +0000 UTC m=+16.808142094" lastFinishedPulling="2025-08-13 01:07:45.179361303 +0000 UTC m=+35.824491262" observedRunningTime="2025-08-13 01:07:45.580219343 +0000 UTC m=+36.225349292" watchObservedRunningTime="2025-08-13 01:07:46.579097684 +0000 UTC m=+37.224227643" Aug 13 01:07:46.711000 audit[3501]: AVC avc: denied { write } for pid=3501 comm="tee" name="fd" dev="proc" ino=25840 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:07:46.711000 audit[3501]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce6dcc7d6 a2=241 a3=1b6 items=1 ppid=3468 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.711000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Aug 13 01:07:46.711000 audit: PATH item=0 name="/dev/fd/63" inode=25835 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:07:46.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:07:46.718000 audit[3525]: AVC avc: denied { write } for pid=3525 comm="tee" name="fd" dev="proc" ino=25077 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:07:46.718000 audit[3525]: SYSCALL arch=c000003e syscall=257 success=yes 
exit=3 a0=ffffff9c a1=7ffcdc0077e5 a2=241 a3=1b6 items=1 ppid=3472 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.718000 audit: CWD cwd="/etc/service/enabled/confd/log" Aug 13 01:07:46.718000 audit: PATH item=0 name="/dev/fd/63" inode=26665 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:07:46.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:07:46.723000 audit[3522]: AVC avc: denied { write } for pid=3522 comm="tee" name="fd" dev="proc" ino=25847 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:07:46.723000 audit[3522]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff9f8117d5 a2=241 a3=1b6 items=1 ppid=3466 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.723000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Aug 13 01:07:46.723000 audit: PATH item=0 name="/dev/fd/63" inode=23887 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:07:46.723000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:07:46.739000 audit[3534]: AVC avc: denied { write } for pid=3534 comm="tee" name="fd" dev="proc" ino=25857 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:07:46.740000 audit[3530]: AVC avc: denied { write } for pid=3530 comm="tee" name="fd" dev="proc" ino=25860 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:07:46.739000 audit[3534]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffbf37e7e6 a2=241 a3=1b6 items=1 ppid=3477 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.739000 audit: CWD cwd="/etc/service/enabled/bird/log" Aug 13 01:07:46.739000 audit: PATH item=0 name="/dev/fd/63" inode=25850 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:07:46.739000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:07:46.741000 audit[3536]: AVC avc: denied { write } for pid=3536 comm="tee" name="fd" dev="proc" ino=25082 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:07:46.741000 audit[3536]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe125ea7e7 a2=241 a3=1b6 items=1 ppid=3462 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.741000 audit: CWD cwd="/etc/service/enabled/cni/log" Aug 13 01:07:46.741000 audit: PATH item=0 name="/dev/fd/63" inode=25853 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:07:46.741000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:07:46.742000 audit[3519]: AVC avc: denied { write } for pid=3519 comm="tee" name="fd" dev="proc" ino=23892 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:07:46.742000 audit[3519]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc6eb797e5 a2=241 a3=1b6 items=1 ppid=3476 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.742000 audit: CWD cwd="/etc/service/enabled/bird6/log" Aug 13 01:07:46.742000 audit: PATH item=0 name="/dev/fd/63" inode=25843 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:07:46.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:07:46.740000 audit[3530]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffedf6b77e5 a2=241 a3=1b6 items=1 ppid=3465 pid=3530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.740000 audit: CWD cwd="/etc/service/enabled/felix/log" Aug 13 01:07:46.740000 audit: PATH item=0 name="/dev/fd/63" inode=25844 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:07:46.740000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:07:46.773751 kubelet[2136]: I0813 01:07:46.773695 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng58r\" (UniqueName: \"kubernetes.io/projected/f778cf61-ecc5-488e-930f-9e3220d17c01-kube-api-access-ng58r\") pod \"whisker-6444db5f5d-tjp8w\" (UID: \"f778cf61-ecc5-488e-930f-9e3220d17c01\") " pod="calico-system/whisker-6444db5f5d-tjp8w" Aug 13 01:07:46.774344 kubelet[2136]: I0813 01:07:46.774326 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f778cf61-ecc5-488e-930f-9e3220d17c01-whisker-backend-key-pair\") pod \"whisker-6444db5f5d-tjp8w\" (UID: \"f778cf61-ecc5-488e-930f-9e3220d17c01\") " pod="calico-system/whisker-6444db5f5d-tjp8w" Aug 13 01:07:46.774445 kubelet[2136]: I0813 01:07:46.774426 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f778cf61-ecc5-488e-930f-9e3220d17c01-whisker-ca-bundle\") pod 
\"whisker-6444db5f5d-tjp8w\" (UID: \"f778cf61-ecc5-488e-930f-9e3220d17c01\") " pod="calico-system/whisker-6444db5f5d-tjp8w" Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit: BPF prog-id=10 op=LOAD Aug 13 01:07:46.872000 audit[3576]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffedb8d5730 a2=98 a3=1fffffffffffffff items=0 ppid=3469 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.872000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 01:07:46.872000 audit: BPF prog-id=10 op=UNLOAD Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit: BPF prog-id=11 op=LOAD Aug 13 01:07:46.872000 audit[3576]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffedb8d5610 a2=94 a3=3 items=0 ppid=3469 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.872000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 01:07:46.872000 audit: BPF prog-id=11 op=UNLOAD Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.872000 audit: BPF prog-id=12 op=LOAD Aug 13 01:07:46.872000 audit[3576]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffedb8d5650 a2=94 a3=7ffedb8d5830 items=0 ppid=3469 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.872000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 01:07:46.873000 audit: BPF prog-id=12 op=UNLOAD Aug 13 01:07:46.873000 audit[3576]: AVC avc: denied { perfmon } for pid=3576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.873000 audit[3576]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffedb8d5720 a2=50 a3=a000000085 items=0 ppid=3469 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.873000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 01:07:46.873000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.873000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.873000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.873000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.873000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.873000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.873000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.873000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.873000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.873000 audit: BPF prog-id=13 op=LOAD Aug 13 01:07:46.873000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf95d8290 a2=98 a3=3 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.873000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.874000 audit: BPF prog-id=13 op=UNLOAD Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit: BPF prog-id=14 op=LOAD Aug 13 01:07:46.874000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf95d8080 a2=94 a3=54428f items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.874000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.874000 audit: BPF prog-id=14 op=UNLOAD Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
01:07:46.874000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.874000 audit: BPF prog-id=15 op=LOAD Aug 13 01:07:46.874000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf95d80b0 a2=94 a3=2 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.874000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.875000 audit: BPF prog-id=15 op=UNLOAD Aug 13 01:07:46.909621 env[1307]: time="2025-08-13T01:07:46.909562593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6444db5f5d-tjp8w,Uid:f778cf61-ecc5-488e-930f-9e3220d17c01,Namespace:calico-system,Attempt:0,}" Aug 13 01:07:46.987000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.987000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.987000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.987000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.987000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 
13 01:07:46.987000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.987000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.987000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.987000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.987000 audit: BPF prog-id=16 op=LOAD Aug 13 01:07:46.987000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf95d7f70 a2=94 a3=1 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.987000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.988000 audit: BPF prog-id=16 op=UNLOAD Aug 13 01:07:46.988000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.988000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcf95d8040 a2=50 a3=7ffcf95d8120 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.988000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.997000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.997000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf95d7f80 a2=28 a3=0 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.997000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.998000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.998000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcf95d7fb0 a2=28 a3=0 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.998000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.998000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
01:07:46.998000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcf95d7ec0 a2=28 a3=0 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.998000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.998000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.998000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf95d7fd0 a2=28 a3=0 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.998000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.998000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.998000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf95d7fb0 a2=28 a3=0 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.998000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.999000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.999000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf95d7fa0 a2=28 a3=0 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.999000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.999000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf95d7fd0 a2=28 a3=0 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.999000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.999000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcf95d7fb0 a2=28 a3=0 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.999000 
audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:46.999000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:46.999000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcf95d7fd0 a2=28 a3=0 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:46.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:47.000000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.000000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcf95d7fa0 a2=28 a3=0 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:47.000000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.000000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf95d8010 a2=28 a3=0 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:47.000000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.000000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcf95d7dc0 a2=50 a3=1 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:47.001000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.001000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.001000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.001000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.001000 audit[3577]: AVC avc: denied { 
perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.001000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.001000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.001000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.001000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.001000 audit: BPF prog-id=17 op=LOAD Aug 13 01:07:47.001000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcf95d7dc0 a2=94 a3=5 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.001000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:47.001000 audit: BPF prog-id=17 op=UNLOAD Aug 13 01:07:47.001000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.001000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcf95d7e70 a2=50 a3=1 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.001000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.002000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffcf95d7f90 a2=4 a3=38 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.002000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { bpf } for pid=3577 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.002000 audit[3577]: AVC avc: denied { confidentiality } for pid=3577 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:07:47.002000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcf95d7fe0 a2=94 a3=6 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.002000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:47.003000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.003000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.003000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.003000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.003000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.003000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.003000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.003000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.003000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.003000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.003000 audit[3577]: AVC avc: denied { confidentiality } for pid=3577 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:07:47.003000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcf95d7790 a2=94 a3=88 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.003000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:47.004000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.004000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.004000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.004000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.004000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.004000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.004000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.004000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.004000 audit[3577]: AVC avc: denied { perfmon } for pid=3577 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.004000 audit[3577]: AVC avc: denied { bpf } for pid=3577 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
01:07:47.004000 audit[3577]: AVC avc: denied { confidentiality } for pid=3577 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:07:47.004000 audit[3577]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcf95d7790 a2=94 a3=88 items=0 ppid=3469 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.004000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:07:47.012000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.012000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.012000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.012000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.012000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.012000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.012000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.012000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.012000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.012000 audit: BPF prog-id=18 op=LOAD Aug 13 01:07:47.012000 audit[3606]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9cd2e210 a2=98 a3=1999999999999999 items=0 ppid=3469 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.012000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 01:07:47.014000 audit: BPF prog-id=18 op=UNLOAD Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit: BPF prog-id=19 op=LOAD Aug 13 01:07:47.014000 audit[3606]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9cd2e0f0 a2=94 a3=ffff items=0 ppid=3469 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.014000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 01:07:47.014000 audit: BPF prog-id=19 op=UNLOAD Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { perfmon } for pid=3606 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit[3606]: AVC avc: denied { bpf } for pid=3606 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.014000 audit: BPF prog-id=20 op=LOAD Aug 13 01:07:47.014000 audit[3606]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9cd2e130 a2=94 a3=7fff9cd2e310 items=0 ppid=3469 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.014000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 01:07:47.015000 audit: BPF prog-id=20 op=UNLOAD Aug 13 01:07:47.039037 systemd-networkd[1079]: cali62d5c0b27d8: Link UP Aug 13 01:07:47.043065 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:07:47.043154 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali62d5c0b27d8: link becomes ready Aug 13 01:07:47.043352 systemd-networkd[1079]: cali62d5c0b27d8: Gained carrier Aug 13 01:07:47.070428 systemd-networkd[1079]: vxlan.calico: Link UP Aug 13 01:07:47.070436 systemd-networkd[1079]: vxlan.calico: Gained carrier Aug 13 01:07:47.084000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.084000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.084000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.084000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.084000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.084000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.084000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.084000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.084000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.084000 audit: BPF prog-id=21 op=LOAD Aug 13 01:07:47.084000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff0bc8c20 a2=98 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.084000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.085000 audit: BPF prog-id=21 op=UNLOAD Aug 13 01:07:47.085000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.085000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.085000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.085000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.085000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.085000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.085000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.085000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.085000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.085000 audit: BPF prog-id=22 op=LOAD Aug 13 01:07:47.085000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff0bc8a30 a2=94 a3=54428f items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.085000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.086000 audit: BPF prog-id=22 op=UNLOAD Aug 13 01:07:47.086000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.086000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.086000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.086000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.086000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.086000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.086000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.086000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.086000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.086000 audit: BPF prog-id=23 op=LOAD Aug 13 01:07:47.086000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff0bc8a60 a2=94 a3=2 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.086000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.086000 audit: BPF prog-id=23 op=UNLOAD Aug 13 01:07:47.086000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.086000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff0bc8930 a2=28 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.086000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.086000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.086000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff0bc8960 a2=28 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.086000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.087000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.087000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff0bc8870 a2=28 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.087000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.087000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.087000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff0bc8980 a2=28 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.087000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.087000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.087000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff0bc8960 a2=28 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.087000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.087000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.087000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff0bc8950 a2=28 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.087000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.087000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.087000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff0bc8980 a2=28 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.087000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.087000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.087000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff0bc8960 a2=28 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.087000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff0bc8980 a2=28 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.088000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff0bc8950 a2=28 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.088000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff0bc89c0 a2=28 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.088000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.088000 audit: BPF prog-id=24 op=LOAD Aug 13 01:07:47.088000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff0bc8830 a2=94 a3=0 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.088000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.088000 audit: BPF prog-id=24 op=UNLOAD Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffff0bc8820 a2=50 a3=2800 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.089000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffff0bc8820 a2=50 a3=2800 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.089000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit: BPF prog-id=25 op=LOAD Aug 13 01:07:47.089000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff0bc8040 a2=94 a3=2 items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.089000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.089000 audit: BPF prog-id=25 op=UNLOAD Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.089000 audit: BPF prog-id=26 op=LOAD Aug 13 01:07:47.089000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff0bc8140 a2=94 a3=30 items=0 ppid=3469 pid=3634 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.089000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:07:47.093000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.093000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.093000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.093000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.093000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.093000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.093000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.093000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.093000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.093000 audit: BPF prog-id=27 op=LOAD Aug 13 01:07:47.093000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeaa46baf0 a2=98 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.094000 audit: BPF prog-id=27 op=UNLOAD Aug 13 01:07:47.094000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.094000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.094000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.094000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.094000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.094000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.094000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.094000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.094000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.094000 audit: BPF prog-id=28 op=LOAD Aug 13 01:07:47.094000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeaa46b8e0 a2=94 a3=54428f items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.094000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.095000 audit: BPF prog-id=28 op=UNLOAD Aug 13 01:07:47.095000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.095000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.095000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.095000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.095000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.095000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.095000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 01:07:47.095000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.095000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.095000 audit: BPF prog-id=29 op=LOAD Aug 13 01:07:47.095000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeaa46b910 a2=94 a3=2 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.095000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.096000 audit: BPF prog-id=29 op=UNLOAD Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:46.959 [INFO][3580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6444db5f5d--tjp8w-eth0 whisker-6444db5f5d- calico-system f778cf61-ecc5-488e-930f-9e3220d17c01 993 0 2025-08-13 01:07:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6444db5f5d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6444db5f5d-tjp8w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali62d5c0b27d8 [] [] }} ContainerID="867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" Namespace="calico-system" Pod="whisker-6444db5f5d-tjp8w" WorkloadEndpoint="localhost-k8s-whisker--6444db5f5d--tjp8w-" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:46.960 [INFO][3580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" Namespace="calico-system" Pod="whisker-6444db5f5d-tjp8w" WorkloadEndpoint="localhost-k8s-whisker--6444db5f5d--tjp8w-eth0" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:46.984 [INFO][3595] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" HandleID="k8s-pod-network.867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" Workload="localhost-k8s-whisker--6444db5f5d--tjp8w-eth0" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:46.984 [INFO][3595] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" HandleID="k8s-pod-network.867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" Workload="localhost-k8s-whisker--6444db5f5d--tjp8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6444db5f5d-tjp8w", "timestamp":"2025-08-13 01:07:46.98435988 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:47.107098 env[1307]: 2025-08-13 
01:07:46.984 [INFO][3595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:46.984 [INFO][3595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:46.984 [INFO][3595] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:46.991 [INFO][3595] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" host="localhost" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:47.001 [INFO][3595] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:47.005 [INFO][3595] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:47.007 [INFO][3595] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:47.009 [INFO][3595] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:47.009 [INFO][3595] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" host="localhost" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:47.010 [INFO][3595] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:47.014 [INFO][3595] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" host="localhost" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:47.023 [INFO][3595] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" host="localhost" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:47.024 [INFO][3595] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" host="localhost" Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:47.024 [INFO][3595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
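The ipam_plugin.go records above trace Calico's address assignment for the whisker pod: take the host-wide IPAM lock, confirm this host's affinity for block 192.168.88.128/26, claim the next free address (192.168.88.129) under a handle derived from the sandbox ID, write the block back, then release the lock. The Go sketch below is a minimal illustration of that flow under those assumptions; the types and the autoAssign function are invented for the sketch and are not Calico's implementation.

```go
// Simplified illustration of the IPAM flow logged above: lock, pick the first
// free address in the host's affine /26 block, record the owning handle.
// NOT Calico's code; block/autoAssign are invented names for this sketch.
package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	cidr *net.IPNet
	used map[string]string // IP -> handle ID that claimed it
}

var ipamLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

// autoAssign hands out the first free IPv4 address in the block and records
// the handle that owns it ("Writing block in order to claim IPs").
func autoAssign(b *block, handleID string) (net.IP, error) {
	ipamLock.Lock()
	defer ipamLock.Unlock()

	base := b.cidr.IP.Mask(b.cidr.Mask)
	for i := 0; i < 64; i++ { // a /26 holds 64 addresses
		cand := make(net.IP, len(base))
		copy(cand, base)
		cand[len(cand)-1] += byte(i)
		if !b.cidr.Contains(cand) {
			break
		}
		if _, taken := b.used[cand.String()]; !taken {
			b.used[cand.String()] = handleID
			return cand, nil
		}
	}
	return nil, fmt.Errorf("no free addresses in %s", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	// Mark the network address as unusable so the first assignable IP is .129.
	b := &block{cidr: cidr, used: map[string]string{"192.168.88.128": "reserved"}}
	ip, err := autoAssign(b, "k8s-pod-network.867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a")
	fmt.Println(ip, err) // 192.168.88.129 <nil>, matching the address claimed above
}
```

In the log the handle is keyed to the sandbox/container ID, which is why the later teardown can release exactly this address by handle.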
Aug 13 01:07:47.107098 env[1307]: 2025-08-13 01:07:47.024 [INFO][3595] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" HandleID="k8s-pod-network.867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" Workload="localhost-k8s-whisker--6444db5f5d--tjp8w-eth0" Aug 13 01:07:47.107900 env[1307]: 2025-08-13 01:07:47.027 [INFO][3580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" Namespace="calico-system" Pod="whisker-6444db5f5d-tjp8w" WorkloadEndpoint="localhost-k8s-whisker--6444db5f5d--tjp8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6444db5f5d--tjp8w-eth0", GenerateName:"whisker-6444db5f5d-", Namespace:"calico-system", SelfLink:"", UID:"f778cf61-ecc5-488e-930f-9e3220d17c01", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6444db5f5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6444db5f5d-tjp8w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali62d5c0b27d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:47.107900 env[1307]: 2025-08-13 01:07:47.027 [INFO][3580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" Namespace="calico-system" Pod="whisker-6444db5f5d-tjp8w" WorkloadEndpoint="localhost-k8s-whisker--6444db5f5d--tjp8w-eth0" Aug 13 01:07:47.107900 env[1307]: 2025-08-13 01:07:47.027 [INFO][3580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62d5c0b27d8 ContainerID="867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" Namespace="calico-system" Pod="whisker-6444db5f5d-tjp8w" WorkloadEndpoint="localhost-k8s-whisker--6444db5f5d--tjp8w-eth0" Aug 13 01:07:47.107900 env[1307]: 2025-08-13 01:07:47.039 [INFO][3580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" Namespace="calico-system" Pod="whisker-6444db5f5d-tjp8w" WorkloadEndpoint="localhost-k8s-whisker--6444db5f5d--tjp8w-eth0" Aug 13 01:07:47.107900 env[1307]: 2025-08-13 01:07:47.043 [INFO][3580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" Namespace="calico-system" Pod="whisker-6444db5f5d-tjp8w" WorkloadEndpoint="localhost-k8s-whisker--6444db5f5d--tjp8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6444db5f5d--tjp8w-eth0", GenerateName:"whisker-6444db5f5d-", Namespace:"calico-system", SelfLink:"", UID:"f778cf61-ecc5-488e-930f-9e3220d17c01", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6444db5f5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a", Pod:"whisker-6444db5f5d-tjp8w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali62d5c0b27d8", MAC:"2e:04:c5:21:67:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:47.107900 env[1307]: 2025-08-13 01:07:47.102 [INFO][3580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a" Namespace="calico-system" Pod="whisker-6444db5f5d-tjp8w" WorkloadEndpoint="localhost-k8s-whisker--6444db5f5d--tjp8w-eth0" Aug 13 01:07:47.131618 env[1307]: time="2025-08-13T01:07:47.129755055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:47.131618 env[1307]: time="2025-08-13T01:07:47.129805187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:47.131618 env[1307]: time="2025-08-13T01:07:47.129816650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:47.131971 env[1307]: time="2025-08-13T01:07:47.131936387Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a pid=3655 runtime=io.containerd.runc.v2 Aug 13 01:07:47.157295 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:07:47.186730 env[1307]: time="2025-08-13T01:07:47.186678887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6444db5f5d-tjp8w,Uid:f778cf61-ecc5-488e-930f-9e3220d17c01,Namespace:calico-system,Attempt:0,} returns sandbox id \"867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a\"" Aug 13 01:07:47.189320 env[1307]: time="2025-08-13T01:07:47.189280644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 01:07:47.210000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.210000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.210000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.210000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.210000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.210000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.210000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.210000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.210000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.210000 audit: BPF prog-id=30 op=LOAD Aug 13 01:07:47.210000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeaa46b7d0 a2=94 a3=1 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.211000 audit: BPF prog-id=30 op=UNLOAD Aug 
13 01:07:47.211000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.211000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffeaa46b8a0 a2=50 a3=7ffeaa46b980 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.211000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.218000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.218000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeaa46b7e0 a2=28 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.218000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeaa46b810 a2=28 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeaa46b720 a2=28 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeaa46b830 a2=28 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeaa46b810 a2=28 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeaa46b800 a2=28 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeaa46b830 a2=28 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeaa46b810 a2=28 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeaa46b830 a2=28 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeaa46b800 a2=28 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeaa46b870 a2=28 a3=0 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffeaa46b620 a2=50 a3=1 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit: BPF prog-id=31 op=LOAD Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeaa46b620 a2=94 a3=5 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit: BPF prog-id=31 op=UNLOAD Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffeaa46b6d0 a2=50 a3=1 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffeaa46b7f0 a2=4 a3=38 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { confidentiality } for pid=3643 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffeaa46b840 a2=94 a3=6 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { confidentiality } for pid=3643 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffeaa46aff0 a2=94 a3=88 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { perfmon } for pid=3643 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.219000 audit[3643]: AVC avc: denied { confidentiality } for pid=3643 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:07:47.219000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffeaa46aff0 a2=94 a3=88 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.220000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.220000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffeaa46ca20 a2=10 a3=208 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.220000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.220000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.220000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffeaa46c8c0 a2=10 a3=3 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.220000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 
01:07:47.220000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.220000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffeaa46c860 a2=10 a3=3 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.220000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.220000 audit[3643]: AVC avc: denied { bpf } for pid=3643 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:07:47.220000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffeaa46c860 a2=10 a3=7 items=0 ppid=3469 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.220000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:07:47.228000 audit: BPF prog-id=26 op=UNLOAD Aug 13 01:07:47.272000 audit[3712]: NETFILTER_CFG table=mangle:99 family=2 entries=16 op=nft_register_chain pid=3712 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:47.272000 audit[3712]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe08ab32b0 a2=0 a3=7ffe08ab329c items=0 ppid=3469 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.272000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:47.280000 audit[3716]: NETFILTER_CFG table=nat:100 family=2 entries=15 op=nft_register_chain pid=3716 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:47.280000 audit[3716]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe1e626710 a2=0 a3=7ffe1e6266fc items=0 ppid=3469 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.280000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:47.281000 audit[3711]: NETFILTER_CFG table=raw:101 family=2 entries=21 op=nft_register_chain pid=3711 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:47.281000 audit[3711]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffee090b940 a2=0 a3=7ffee090b92c items=0 ppid=3469 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.281000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:47.282000 audit[3713]: NETFILTER_CFG table=filter:102 family=2 entries=39 op=nft_register_chain pid=3713 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:47.282000 audit[3713]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffce4fa35b0 a2=0 a3=7ffce4fa359c items=0 ppid=3469 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.282000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:47.304000 audit[3724]: NETFILTER_CFG table=filter:103 family=2 entries=59 op=nft_register_chain pid=3724 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:47.304000 audit[3724]: SYSCALL arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7ffdb2bf2870 a2=0 a3=7ffdb2bf285c items=0 ppid=3469 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.304000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:47.463317 kubelet[2136]: I0813 01:07:47.458957 2136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f28159-1b5d-4a2e-b8c8-27af428f9df3" path="/var/lib/kubelet/pods/f1f28159-1b5d-4a2e-b8c8-27af428f9df3/volumes" Aug 13 01:07:47.503414 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:51874.service. Aug 13 01:07:47.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.139:22-10.0.0.1:51874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:47.508572 kernel: kauditd_printk_skb: 561 callbacks suppressed Aug 13 01:07:47.508645 kernel: audit: type=1130 audit(1755047267.501:411): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.139:22-10.0.0.1:51874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:07:47.543000 audit[3729]: USER_ACCT pid=3729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.545162 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 51874 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:07:47.547000 audit[3729]: CRED_ACQ pid=3729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.549515 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:07:47.552669 kernel: audit: type=1101 audit(1755047267.543:412): pid=3729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.552827 kernel: audit: type=1103 audit(1755047267.547:413): pid=3729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.552866 kernel: audit: type=1006 audit(1755047267.547:414): pid=3729 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Aug 13 01:07:47.553557 systemd-logind[1290]: New session 9 of user core. Aug 13 01:07:47.554341 systemd[1]: Started session-9.scope. 
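The kauditd lines here pair numeric record types with the names userspace tools print: type=1130 for the SERVICE_START of the per-connection sshd unit, type=1101 for PAM's USER_ACCT, type=1103 for CRED_ACQ, type=1006 for the LOGIN record that sets auid=500, and so on. A minimal lookup table for the record types that appear in this log, assembled from those pairings plus the standard <linux/audit.h> constants, might look like the sketch below.

```go
// Map the numeric audit types kauditd prints above to their symbolic names.
// Entries are limited to record types that actually occur in this log.
package main

import "fmt"

var auditType = map[int]string{
	1006: "LOGIN",         // auid assignment when the sshd session starts
	1101: "USER_ACCT",     // PAM accounting
	1103: "CRED_ACQ",      // PAM credential acquisition
	1104: "CRED_DISP",     // PAM credential disposal
	1105: "USER_START",    // PAM session open
	1106: "USER_END",      // PAM session close
	1130: "SERVICE_START", // systemd unit started
	1131: "SERVICE_STOP",  // systemd unit stopped
	1300: "SYSCALL",
	1327: "PROCTITLE",
	1400: "AVC", // SELinux denials such as the capability2/lockdown ones above
}

func main() {
	for _, t := range []int{1130, 1101, 1103, 1006, 1300, 1327} {
		fmt.Printf("type=%d -> %s\n", t, auditType[t])
	}
}
```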
Aug 13 01:07:47.558662 kernel: audit: type=1300 audit(1755047267.547:414): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdda23dc60 a2=3 a3=0 items=0 ppid=1 pid=3729 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.547000 audit[3729]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdda23dc60 a2=3 a3=0 items=0 ppid=1 pid=3729 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:47.547000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:07:47.560360 kernel: audit: type=1327 audit(1755047267.547:414): proctitle=737368643A20636F7265205B707269765D Aug 13 01:07:47.560421 kernel: audit: type=1105 audit(1755047267.557:415): pid=3729 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.557000 audit[3729]: USER_START pid=3729 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.564386 kernel: audit: type=1103 audit(1755047267.558:416): pid=3732 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.558000 audit[3732]: CRED_ACQ pid=3732 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.696497 sshd[3729]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:47.696000 audit[3729]: USER_END pid=3729 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.699334 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:51874.service: Deactivated successfully. Aug 13 01:07:47.700406 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:07:47.700450 systemd-logind[1290]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:07:47.701457 systemd-logind[1290]: Removed session 9. 
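Audit PROCTITLE fields are the command line hex-encoded with NUL bytes separating argv elements: the proctitle=7373... value in the sshd records above decodes to "sshd: core [priv]", and the long bpftool ones earlier decode to commands such as bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A, while the iptables-nft-re records decode to iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000. A few lines of Go recover the readable form:

```go
// Decode an audit proctitle= value: hex-decode, then treat NUL bytes as the
// separators between argv elements.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	return strings.Join(args, " "), nil
}

func main() {
	s, err := decodeProctitle("737368643A20636F7265205B707269765D")
	fmt.Println(s, err) // "sshd: core [priv]" <nil>
}
```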
Aug 13 01:07:47.696000 audit[3729]: CRED_DISP pid=3729 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.706329 kernel: audit: type=1106 audit(1755047267.696:417): pid=3729 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.706387 kernel: audit: type=1104 audit(1755047267.696:418): pid=3729 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:47.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.139:22-10.0.0.1:51874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:48.456921 env[1307]: time="2025-08-13T01:07:48.456812351Z" level=info msg="StopPodSandbox for \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\"" Aug 13 01:07:48.457720 env[1307]: time="2025-08-13T01:07:48.456893546Z" level=info msg="StopPodSandbox for \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\"" Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.512 [INFO][3770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.512 [INFO][3770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" iface="eth0" netns="/var/run/netns/cni-0d51c6e1-f3fe-7db6-5a60-c63bfd371377" Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.512 [INFO][3770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" iface="eth0" netns="/var/run/netns/cni-0d51c6e1-f3fe-7db6-5a60-c63bfd371377" Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.512 [INFO][3770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" iface="eth0" netns="/var/run/netns/cni-0d51c6e1-f3fe-7db6-5a60-c63bfd371377" Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.512 [INFO][3770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.512 [INFO][3770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.545 [INFO][3786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" HandleID="k8s-pod-network.dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.545 [INFO][3786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.545 [INFO][3786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.550 [WARNING][3786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" HandleID="k8s-pod-network.dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.550 [INFO][3786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" HandleID="k8s-pod-network.dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.555 [INFO][3786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:48.558859 env[1307]: 2025-08-13 01:07:48.557 [INFO][3770] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:07:48.564496 systemd[1]: run-netns-cni\x2d0d51c6e1\x2df3fe\x2d7db6\x2d5a60\x2dc63bfd371377.mount: Deactivated successfully. Aug 13 01:07:48.565653 env[1307]: time="2025-08-13T01:07:48.565609021Z" level=info msg="TearDown network for sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\" successfully" Aug 13 01:07:48.565752 env[1307]: time="2025-08-13T01:07:48.565730548Z" level=info msg="StopPodSandbox for \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\" returns successfully" Aug 13 01:07:48.566810 env[1307]: time="2025-08-13T01:07:48.566734271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66678997c4-cjt94,Uid:1d6cb667-b4a4-4c92-a22d-5b802942ec42,Namespace:calico-apiserver,Attempt:1,}" Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.519 [INFO][3771] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.519 [INFO][3771] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" iface="eth0" netns="/var/run/netns/cni-796ff217-e306-eb07-9c05-aff8df1eb512" Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.519 [INFO][3771] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" iface="eth0" netns="/var/run/netns/cni-796ff217-e306-eb07-9c05-aff8df1eb512" Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.520 [INFO][3771] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" iface="eth0" netns="/var/run/netns/cni-796ff217-e306-eb07-9c05-aff8df1eb512" Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.520 [INFO][3771] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.520 [INFO][3771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.555 [INFO][3792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" HandleID="k8s-pod-network.9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.555 [INFO][3792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.555 [INFO][3792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.565 [WARNING][3792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" HandleID="k8s-pod-network.9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.565 [INFO][3792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" HandleID="k8s-pod-network.9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.567 [INFO][3792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:48.570747 env[1307]: 2025-08-13 01:07:48.568 [INFO][3771] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:07:48.573049 env[1307]: time="2025-08-13T01:07:48.570881347Z" level=info msg="TearDown network for sandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\" successfully" Aug 13 01:07:48.573049 env[1307]: time="2025-08-13T01:07:48.570919355Z" level=info msg="StopPodSandbox for \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\" returns successfully" Aug 13 01:07:48.573049 env[1307]: time="2025-08-13T01:07:48.571455094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66678997c4-jk2r2,Uid:7b3501aa-75e5-451f-8e19-819e941f33bc,Namespace:calico-apiserver,Attempt:1,}" Aug 13 01:07:48.575103 systemd[1]: run-netns-cni\x2d796ff217\x2de306\x2deb07\x2d9c05\x2daff8df1eb512.mount: Deactivated successfully. 
Aug 13 01:07:48.594752 env[1307]: time="2025-08-13T01:07:48.594701636Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:48.604606 env[1307]: time="2025-08-13T01:07:48.603597143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:48.608738 env[1307]: time="2025-08-13T01:07:48.608710084Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:48.611554 env[1307]: time="2025-08-13T01:07:48.611512434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 01:07:48.611896 env[1307]: time="2025-08-13T01:07:48.611872908Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:48.618677 env[1307]: time="2025-08-13T01:07:48.618637360Z" level=info msg="CreateContainer within sandbox \"867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 01:07:48.633985 env[1307]: time="2025-08-13T01:07:48.633898666Z" level=info msg="CreateContainer within sandbox \"867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5f2c47e1fc704d76e6cea50b43ad9ffca73f60cb31a83cf046c1888e0a154ccc\"" Aug 13 01:07:48.635784 env[1307]: time="2025-08-13T01:07:48.635746265Z" level=info msg="StartContainer for \"5f2c47e1fc704d76e6cea50b43ad9ffca73f60cb31a83cf046c1888e0a154ccc\"" Aug 13 01:07:48.724303 systemd-networkd[1079]: calid5d6c2351a1: Link UP Aug 13 01:07:48.727301 systemd-networkd[1079]: calid5d6c2351a1: Gained carrier Aug 13 01:07:48.727800 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid5d6c2351a1: link becomes ready Aug 13 01:07:48.728765 env[1307]: time="2025-08-13T01:07:48.728726288Z" level=info msg="StartContainer for \"5f2c47e1fc704d76e6cea50b43ad9ffca73f60cb31a83cf046c1888e0a154ccc\" returns successfully" Aug 13 01:07:48.729689 systemd-networkd[1079]: cali62d5c0b27d8: Gained IPv6LL Aug 13 01:07:48.730663 env[1307]: time="2025-08-13T01:07:48.730644120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.654 [INFO][3808] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0 calico-apiserver-66678997c4- calico-apiserver 1d6cb667-b4a4-4c92-a22d-5b802942ec42 1012 0 2025-08-13 01:07:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66678997c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66678997c4-cjt94 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid5d6c2351a1 [] [] }} 
ContainerID="5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-cjt94" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--cjt94-" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.654 [INFO][3808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-cjt94" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.687 [INFO][3851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" HandleID="k8s-pod-network.5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.687 [INFO][3851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" HandleID="k8s-pod-network.5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66678997c4-cjt94", "timestamp":"2025-08-13 01:07:48.687419215 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.687 [INFO][3851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.688 [INFO][3851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.688 [INFO][3851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.694 [INFO][3851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" host="localhost" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.698 [INFO][3851] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.703 [INFO][3851] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.705 [INFO][3851] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.706 [INFO][3851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.706 [INFO][3851] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" host="localhost" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.708 [INFO][3851] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.712 [INFO][3851] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" host="localhost" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.717 [INFO][3851] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" host="localhost" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.717 [INFO][3851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" host="localhost" Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.717 [INFO][3851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:07:48.740886 env[1307]: 2025-08-13 01:07:48.717 [INFO][3851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" HandleID="k8s-pod-network.5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:07:48.741504 env[1307]: 2025-08-13 01:07:48.722 [INFO][3808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-cjt94" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0", GenerateName:"calico-apiserver-66678997c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d6cb667-b4a4-4c92-a22d-5b802942ec42", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66678997c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66678997c4-cjt94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid5d6c2351a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.741504 env[1307]: 2025-08-13 01:07:48.722 [INFO][3808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-cjt94" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:07:48.741504 env[1307]: 2025-08-13 01:07:48.722 [INFO][3808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5d6c2351a1 ContainerID="5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-cjt94" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:07:48.741504 env[1307]: 2025-08-13 01:07:48.728 [INFO][3808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-cjt94" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:07:48.741504 env[1307]: 2025-08-13 01:07:48.728 [INFO][3808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-cjt94" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0", GenerateName:"calico-apiserver-66678997c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d6cb667-b4a4-4c92-a22d-5b802942ec42", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66678997c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd", Pod:"calico-apiserver-66678997c4-cjt94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid5d6c2351a1", MAC:"02:19:48:f2:ee:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.741504 env[1307]: 2025-08-13 01:07:48.738 [INFO][3808] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-cjt94" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:07:48.756346 env[1307]: time="2025-08-13T01:07:48.756261394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:48.756528 env[1307]: time="2025-08-13T01:07:48.756320845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:48.756528 env[1307]: time="2025-08-13T01:07:48.756339262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.756772 env[1307]: time="2025-08-13T01:07:48.756724887Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd pid=3905 runtime=io.containerd.runc.v2 Aug 13 01:07:48.755000 audit[3910]: NETFILTER_CFG table=filter:104 family=2 entries=50 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:48.755000 audit[3910]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7fff0a3495c0 a2=0 a3=7fff0a3495ac items=0 ppid=3469 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:48.755000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:48.776777 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:07:48.801301 env[1307]: time="2025-08-13T01:07:48.801258413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66678997c4-cjt94,Uid:1d6cb667-b4a4-4c92-a22d-5b802942ec42,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd\"" Aug 13 01:07:48.825040 systemd-networkd[1079]: calia4ee9051bbb: Link UP Aug 13 01:07:48.825186 systemd-networkd[1079]: calia4ee9051bbb: Gained carrier Aug 13 01:07:48.825801 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia4ee9051bbb: link becomes ready Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.651 [INFO][3803] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0 calico-apiserver-66678997c4- calico-apiserver 7b3501aa-75e5-451f-8e19-819e941f33bc 1013 0 2025-08-13 01:07:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66678997c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66678997c4-jk2r2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia4ee9051bbb [] [] }} ContainerID="5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-jk2r2" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--jk2r2-" Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.651 [INFO][3803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-jk2r2" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.689 [INFO][3845] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" HandleID="k8s-pod-network.5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:07:48.839873 
env[1307]: 2025-08-13 01:07:48.689 [INFO][3845] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" HandleID="k8s-pod-network.5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66678997c4-jk2r2", "timestamp":"2025-08-13 01:07:48.68931405 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.689 [INFO][3845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.717 [INFO][3845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.717 [INFO][3845] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.794 [INFO][3845] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" host="localhost" Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.799 [INFO][3845] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.804 [INFO][3845] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.806 [INFO][3845] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.808 [INFO][3845] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.808 [INFO][3845] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" host="localhost" Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.809 [INFO][3845] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40 Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.813 [INFO][3845] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" host="localhost" Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.819 [INFO][3845] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" host="localhost" Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.819 [INFO][3845] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" host="localhost" Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.819 [INFO][3845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:07:48.839873 env[1307]: 2025-08-13 01:07:48.819 [INFO][3845] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" HandleID="k8s-pod-network.5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:07:48.840502 env[1307]: 2025-08-13 01:07:48.822 [INFO][3803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-jk2r2" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0", GenerateName:"calico-apiserver-66678997c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b3501aa-75e5-451f-8e19-819e941f33bc", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66678997c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66678997c4-jk2r2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4ee9051bbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.840502 env[1307]: 2025-08-13 01:07:48.822 [INFO][3803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-jk2r2" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:07:48.840502 env[1307]: 2025-08-13 01:07:48.822 [INFO][3803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4ee9051bbb ContainerID="5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-jk2r2" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:07:48.840502 env[1307]: 2025-08-13 01:07:48.825 [INFO][3803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-jk2r2" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:07:48.840502 env[1307]: 2025-08-13 01:07:48.825 [INFO][3803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-jk2r2" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0", GenerateName:"calico-apiserver-66678997c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b3501aa-75e5-451f-8e19-819e941f33bc", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66678997c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40", Pod:"calico-apiserver-66678997c4-jk2r2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4ee9051bbb", MAC:"6a:d8:70:73:a6:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.840502 env[1307]: 2025-08-13 01:07:48.837 [INFO][3803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40" Namespace="calico-apiserver" Pod="calico-apiserver-66678997c4-jk2r2" WorkloadEndpoint="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:07:48.849000 audit[3951]: NETFILTER_CFG table=filter:105 family=2 entries=41 op=nft_register_chain pid=3951 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:48.849000 audit[3951]: SYSCALL arch=c000003e syscall=46 success=yes exit=23076 a0=3 a1=7ffc283a6ca0 a2=0 a3=7ffc283a6c8c items=0 ppid=3469 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:48.849000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:48.853142 env[1307]: time="2025-08-13T01:07:48.853051890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:48.853142 env[1307]: time="2025-08-13T01:07:48.853102703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:48.853237 env[1307]: time="2025-08-13T01:07:48.853135369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.853420 env[1307]: time="2025-08-13T01:07:48.853371229Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40 pid=3955 runtime=io.containerd.runc.v2 Aug 13 01:07:48.873563 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:07:48.895743 env[1307]: time="2025-08-13T01:07:48.895699801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66678997c4-jk2r2,Uid:7b3501aa-75e5-451f-8e19-819e941f33bc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40\"" Aug 13 01:07:48.985748 systemd-networkd[1079]: vxlan.calico: Gained IPv6LL Aug 13 01:07:49.226768 kubelet[2136]: I0813 01:07:49.226724 2136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:07:49.456546 env[1307]: time="2025-08-13T01:07:49.456501441Z" level=info msg="StopPodSandbox for \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\"" Aug 13 01:07:49.456805 env[1307]: time="2025-08-13T01:07:49.456775548Z" level=info msg="StopPodSandbox for \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\"" Aug 13 01:07:49.567322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528034397.mount: Deactivated successfully. Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.785 [INFO][4055] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.786 [INFO][4055] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" iface="eth0" netns="/var/run/netns/cni-d3dff981-3b82-808f-c3cd-7291c83f7969" Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.786 [INFO][4055] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" iface="eth0" netns="/var/run/netns/cni-d3dff981-3b82-808f-c3cd-7291c83f7969" Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.786 [INFO][4055] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" iface="eth0" netns="/var/run/netns/cni-d3dff981-3b82-808f-c3cd-7291c83f7969" Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.786 [INFO][4055] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.786 [INFO][4055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.809 [INFO][4070] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" HandleID="k8s-pod-network.f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.809 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.809 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.814 [WARNING][4070] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" HandleID="k8s-pod-network.f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.814 [INFO][4070] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" HandleID="k8s-pod-network.f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.815 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:49.819078 env[1307]: 2025-08-13 01:07:49.817 [INFO][4055] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:07:49.819078 env[1307]: time="2025-08-13T01:07:49.818969866Z" level=info msg="TearDown network for sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\" successfully" Aug 13 01:07:49.819078 env[1307]: time="2025-08-13T01:07:49.819004728Z" level=info msg="StopPodSandbox for \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\" returns successfully" Aug 13 01:07:49.819953 env[1307]: time="2025-08-13T01:07:49.819794804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgm6z,Uid:2333ecfa-adb6-4791-8fd0-6a082b51d429,Namespace:calico-system,Attempt:1,}" Aug 13 01:07:49.821738 systemd[1]: run-netns-cni\x2dd3dff981\x2d3b82\x2d808f\x2dc3cd\x2d7291c83f7969.mount: Deactivated successfully. Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.784 [INFO][4054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.784 [INFO][4054] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" iface="eth0" netns="/var/run/netns/cni-1745cf99-ac2e-ab3c-2dc7-2d5ca48bdc10" Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.784 [INFO][4054] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" iface="eth0" netns="/var/run/netns/cni-1745cf99-ac2e-ab3c-2dc7-2d5ca48bdc10" Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.786 [INFO][4054] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" iface="eth0" netns="/var/run/netns/cni-1745cf99-ac2e-ab3c-2dc7-2d5ca48bdc10" Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.786 [INFO][4054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.786 [INFO][4054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.811 [INFO][4072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" HandleID="k8s-pod-network.1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.811 [INFO][4072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.815 [INFO][4072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.827 [WARNING][4072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" HandleID="k8s-pod-network.1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.827 [INFO][4072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" HandleID="k8s-pod-network.1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.829 [INFO][4072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:49.833035 env[1307]: 2025-08-13 01:07:49.831 [INFO][4054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:07:49.833707 env[1307]: time="2025-08-13T01:07:49.833173334Z" level=info msg="TearDown network for sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\" successfully" Aug 13 01:07:49.833707 env[1307]: time="2025-08-13T01:07:49.833208926Z" level=info msg="StopPodSandbox for \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\" returns successfully" Aug 13 01:07:49.833778 kubelet[2136]: E0813 01:07:49.833623 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:49.834484 env[1307]: time="2025-08-13T01:07:49.834432203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xl6pl,Uid:7b5f5aa8-77fb-4a21-9305-7c26692f1342,Namespace:kube-system,Attempt:1,}" Aug 13 01:07:49.837478 systemd[1]: run-netns-cni\x2d1745cf99\x2dac2e\x2dab3c\x2d2dc7\x2d2d5ca48bdc10.mount: Deactivated successfully. 
Aug 13 01:07:49.956217 systemd-networkd[1079]: calib68604f2712: Link UP Aug 13 01:07:49.959000 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:07:49.959138 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib68604f2712: link becomes ready Aug 13 01:07:49.959232 systemd-networkd[1079]: calib68604f2712: Gained carrier Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.888 [INFO][4087] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bgm6z-eth0 csi-node-driver- calico-system 2333ecfa-adb6-4791-8fd0-6a082b51d429 1040 0 2025-08-13 01:07:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bgm6z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib68604f2712 [] [] }} ContainerID="01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" Namespace="calico-system" Pod="csi-node-driver-bgm6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgm6z-" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.888 [INFO][4087] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" Namespace="calico-system" Pod="csi-node-driver-bgm6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.915 [INFO][4117] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" HandleID="k8s-pod-network.01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.915 [INFO][4117] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" HandleID="k8s-pod-network.01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bgm6z", "timestamp":"2025-08-13 01:07:49.915084609 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.915 [INFO][4117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.915 [INFO][4117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.915 [INFO][4117] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.920 [INFO][4117] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" host="localhost" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.924 [INFO][4117] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.928 [INFO][4117] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.929 [INFO][4117] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.931 [INFO][4117] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.931 [INFO][4117] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" host="localhost" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.937 [INFO][4117] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722 Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.941 [INFO][4117] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" host="localhost" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.946 [INFO][4117] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" host="localhost" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.946 [INFO][4117] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" host="localhost" Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.946 [INFO][4117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:07:49.971209 env[1307]: 2025-08-13 01:07:49.946 [INFO][4117] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" HandleID="k8s-pod-network.01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:07:49.971845 env[1307]: 2025-08-13 01:07:49.954 [INFO][4087] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" Namespace="calico-system" Pod="csi-node-driver-bgm6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgm6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bgm6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2333ecfa-adb6-4791-8fd0-6a082b51d429", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bgm6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib68604f2712", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:49.971845 env[1307]: 2025-08-13 01:07:49.954 [INFO][4087] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" Namespace="calico-system" Pod="csi-node-driver-bgm6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:07:49.971845 env[1307]: 2025-08-13 01:07:49.954 [INFO][4087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib68604f2712 ContainerID="01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" Namespace="calico-system" Pod="csi-node-driver-bgm6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:07:49.971845 env[1307]: 2025-08-13 01:07:49.959 [INFO][4087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" Namespace="calico-system" Pod="csi-node-driver-bgm6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:07:49.971845 env[1307]: 2025-08-13 01:07:49.959 [INFO][4087] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" Namespace="calico-system" Pod="csi-node-driver-bgm6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgm6z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bgm6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2333ecfa-adb6-4791-8fd0-6a082b51d429", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722", Pod:"csi-node-driver-bgm6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib68604f2712", MAC:"96:de:ef:e3:87:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:49.971845 env[1307]: 2025-08-13 01:07:49.969 [INFO][4087] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722" Namespace="calico-system" Pod="csi-node-driver-bgm6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:07:49.978000 audit[4147]: NETFILTER_CFG table=filter:106 family=2 entries=44 op=nft_register_chain pid=4147 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:49.978000 audit[4147]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffe38c44530 a2=0 a3=7ffe38c4451c items=0 ppid=3469 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:49.978000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:49.980553 env[1307]: time="2025-08-13T01:07:49.980212604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:49.980553 env[1307]: time="2025-08-13T01:07:49.980248848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:49.980553 env[1307]: time="2025-08-13T01:07:49.980258107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:49.981120 env[1307]: time="2025-08-13T01:07:49.981052712Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722 pid=4149 runtime=io.containerd.runc.v2 Aug 13 01:07:50.000962 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:07:50.012476 env[1307]: time="2025-08-13T01:07:50.012433740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgm6z,Uid:2333ecfa-adb6-4791-8fd0-6a082b51d429,Namespace:calico-system,Attempt:1,} returns sandbox id \"01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722\"" Aug 13 01:07:50.049764 systemd-networkd[1079]: calica8de8ceef5: Link UP Aug 13 01:07:50.051605 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calica8de8ceef5: link becomes ready Aug 13 01:07:50.052625 systemd-networkd[1079]: calica8de8ceef5: Gained carrier Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:49.892 [INFO][4101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0 coredns-7c65d6cfc9- kube-system 7b5f5aa8-77fb-4a21-9305-7c26692f1342 1039 0 2025-08-13 01:07:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-xl6pl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calica8de8ceef5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xl6pl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xl6pl-" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:49.893 [INFO][4101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xl6pl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:49.915 [INFO][4123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" HandleID="k8s-pod-network.538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:49.915 [INFO][4123] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" HandleID="k8s-pod-network.538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000494500), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-xl6pl", "timestamp":"2025-08-13 01:07:49.915572901 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:49.915 [INFO][4123] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:49.946 [INFO][4123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:49.946 [INFO][4123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.022 [INFO][4123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" host="localhost" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.026 [INFO][4123] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.030 [INFO][4123] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.032 [INFO][4123] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.036 [INFO][4123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.036 [INFO][4123] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" host="localhost" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.037 [INFO][4123] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.040 [INFO][4123] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" host="localhost" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.045 [INFO][4123] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" host="localhost" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.045 [INFO][4123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" host="localhost" Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.045 [INFO][4123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
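
The IPAM exchange above takes a host-wide lock, confirms this host's affinity for the 192.168.88.128/26 block, and claims the next free address (192.168.88.133) under a handle named after the sandbox ID. A minimal Python sketch of that allocation step, purely illustrative rather than Calico's actual code, and assuming for the example that .129 through .132 were handed out to earlier sandboxes:

import ipaddress

def claim_next_free(block_cidr, assigned):
    """Return the first unassigned host address in the block (illustrative only)."""
    block = ipaddress.ip_network(block_cidr)
    for addr in block.hosts():          # hosts() skips the network and broadcast addresses
        if str(addr) not in assigned:
            assigned.add(str(addr))
            return str(addr)
    raise RuntimeError("block %s is exhausted" % block_cidr)

# Assumed for illustration: .129-.132 were claimed before this log excerpt starts.
assigned = {"192.168.88.%d" % i for i in range(129, 133)}
print(claim_next_free("192.168.88.128/26", assigned))   # -> 192.168.88.133
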
Aug 13 01:07:50.064395 env[1307]: 2025-08-13 01:07:50.045 [INFO][4123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" HandleID="k8s-pod-network.538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:07:50.065103 env[1307]: 2025-08-13 01:07:50.048 [INFO][4101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xl6pl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7b5f5aa8-77fb-4a21-9305-7c26692f1342", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-xl6pl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica8de8ceef5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:50.065103 env[1307]: 2025-08-13 01:07:50.048 [INFO][4101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xl6pl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:07:50.065103 env[1307]: 2025-08-13 01:07:50.048 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica8de8ceef5 ContainerID="538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xl6pl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:07:50.065103 env[1307]: 2025-08-13 01:07:50.051 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xl6pl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:07:50.065103 env[1307]: 2025-08-13 01:07:50.051 
[INFO][4101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xl6pl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7b5f5aa8-77fb-4a21-9305-7c26692f1342", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b", Pod:"coredns-7c65d6cfc9-xl6pl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica8de8ceef5", MAC:"fa:63:5a:1d:10:47", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:50.065103 env[1307]: 2025-08-13 01:07:50.059 [INFO][4101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xl6pl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:07:50.076000 audit[4192]: NETFILTER_CFG table=filter:107 family=2 entries=54 op=nft_register_chain pid=4192 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:50.076000 audit[4192]: SYSCALL arch=c000003e syscall=46 success=yes exit=26116 a0=3 a1=7ffdf0b341e0 a2=0 a3=7ffdf0b341cc items=0 ppid=3469 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:50.076000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:50.081774 env[1307]: time="2025-08-13T01:07:50.081699487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:50.081774 env[1307]: time="2025-08-13T01:07:50.081743927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:50.081908 env[1307]: time="2025-08-13T01:07:50.081761373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:50.081984 env[1307]: time="2025-08-13T01:07:50.081896137Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b pid=4200 runtime=io.containerd.runc.v2 Aug 13 01:07:50.103373 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:07:50.126304 env[1307]: time="2025-08-13T01:07:50.126255781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xl6pl,Uid:7b5f5aa8-77fb-4a21-9305-7c26692f1342,Namespace:kube-system,Attempt:1,} returns sandbox id \"538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b\"" Aug 13 01:07:50.126942 kubelet[2136]: E0813 01:07:50.126914 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:50.129082 env[1307]: time="2025-08-13T01:07:50.129028992Z" level=info msg="CreateContainer within sandbox \"538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:07:50.138225 systemd-networkd[1079]: calid5d6c2351a1: Gained IPv6LL Aug 13 01:07:50.155040 env[1307]: time="2025-08-13T01:07:50.154995268Z" level=info msg="CreateContainer within sandbox \"538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"884d7d00beb50e190e356c5789654dc27b4a389bea4fd4534996855d642aacb2\"" Aug 13 01:07:50.155497 env[1307]: time="2025-08-13T01:07:50.155458068Z" level=info msg="StartContainer for \"884d7d00beb50e190e356c5789654dc27b4a389bea4fd4534996855d642aacb2\"" Aug 13 01:07:50.196436 env[1307]: time="2025-08-13T01:07:50.196384811Z" level=info msg="StartContainer for \"884d7d00beb50e190e356c5789654dc27b4a389bea4fd4534996855d642aacb2\" returns successfully" Aug 13 01:07:50.457273 env[1307]: time="2025-08-13T01:07:50.457206271Z" level=info msg="StopPodSandbox for \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\"" Aug 13 01:07:50.457730 systemd-networkd[1079]: calia4ee9051bbb: Gained IPv6LL Aug 13 01:07:50.581887 kubelet[2136]: E0813 01:07:50.581782 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:50.847000 audit[4290]: NETFILTER_CFG table=filter:108 family=2 entries=20 op=nft_register_rule pid=4290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:50.847000 audit[4290]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc24184d00 a2=0 a3=7ffc24184cec items=0 ppid=2285 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:50.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:50.851260 kubelet[2136]: I0813 01:07:50.851121 2136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xl6pl" podStartSLOduration=36.851096533 podStartE2EDuration="36.851096533s" podCreationTimestamp="2025-08-13 01:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:07:50.834117907 +0000 UTC m=+41.479247856" watchObservedRunningTime="2025-08-13 01:07:50.851096533 +0000 UTC m=+41.496226482" Aug 13 01:07:50.855000 audit[4290]: NETFILTER_CFG table=nat:109 family=2 entries=14 op=nft_register_rule pid=4290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:50.855000 audit[4290]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc24184d00 a2=0 a3=0 items=0 ppid=2285 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:50.855000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:50.876000 audit[4299]: NETFILTER_CFG table=filter:110 family=2 entries=17 op=nft_register_rule pid=4299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:50.876000 audit[4299]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd19272d00 a2=0 a3=7ffd19272cec items=0 ppid=2285 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:50.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:50.881000 audit[4299]: NETFILTER_CFG table=nat:111 family=2 entries=35 op=nft_register_chain pid=4299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:50.881000 audit[4299]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd19272d00 a2=0 a3=7ffd19272cec items=0 ppid=2285 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:50.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.854 [INFO][4282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.854 [INFO][4282] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" iface="eth0" netns="/var/run/netns/cni-e403be87-faa5-c7c0-5374-41d4e0bb032d" Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.855 [INFO][4282] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" iface="eth0" netns="/var/run/netns/cni-e403be87-faa5-c7c0-5374-41d4e0bb032d" Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.855 [INFO][4282] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" iface="eth0" netns="/var/run/netns/cni-e403be87-faa5-c7c0-5374-41d4e0bb032d" Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.855 [INFO][4282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.855 [INFO][4282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.897 [INFO][4293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" HandleID="k8s-pod-network.feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.897 [INFO][4293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.897 [INFO][4293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.902 [WARNING][4293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" HandleID="k8s-pod-network.feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.903 [INFO][4293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" HandleID="k8s-pod-network.feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.904 [INFO][4293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:50.907648 env[1307]: 2025-08-13 01:07:50.906 [INFO][4282] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:07:50.913561 env[1307]: time="2025-08-13T01:07:50.911238906Z" level=info msg="TearDown network for sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\" successfully" Aug 13 01:07:50.913561 env[1307]: time="2025-08-13T01:07:50.911277364Z" level=info msg="StopPodSandbox for \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\" returns successfully" Aug 13 01:07:50.913561 env[1307]: time="2025-08-13T01:07:50.912961643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tbrxc,Uid:8b672ef7-dbd1-4d90-9a79-75019f71379f,Namespace:kube-system,Attempt:1,}" Aug 13 01:07:50.913728 kubelet[2136]: E0813 01:07:50.911780 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:50.910650 systemd[1]: run-netns-cni\x2de403be87\x2dfaa5\x2dc7c0\x2d5374\x2d41d4e0bb032d.mount: Deactivated successfully. 
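
The recurring kubelet error at dns.go:153, "Nameserver limits exceeded", means the node's resolv.conf lists more than the three nameservers kubelet (and most resolvers) will use, so the extras are omitted and the applied line becomes "1.1.1.1 1.0.0.1 8.8.8.8". A rough sketch of that trimming, not kubelet's code, with a made-up fourth nameserver since the dropped entry does not appear in the log:

MAX_NAMESERVERS = 3   # resolv.conf limit that kubelet enforces for pod DNS

def applied_nameservers(resolv_conf_text):
    """Keep only the first three nameserver entries, mirroring the warning above."""
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers[:MAX_NAMESERVERS]

# The fourth entry is hypothetical; the omitted server is not shown in the log.
sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
print(applied_nameservers(sample))   # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
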
Aug 13 01:07:51.022185 systemd-networkd[1079]: calib7c124b65d7: Link UP Aug 13 01:07:51.025117 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:07:51.025196 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib7c124b65d7: link becomes ready Aug 13 01:07:51.025295 systemd-networkd[1079]: calib7c124b65d7: Gained carrier Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:50.963 [INFO][4303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0 coredns-7c65d6cfc9- kube-system 8b672ef7-dbd1-4d90-9a79-75019f71379f 1066 0 2025-08-13 01:07:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-tbrxc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib7c124b65d7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbrxc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbrxc-" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:50.963 [INFO][4303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbrxc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:50.989 [INFO][4318] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" HandleID="k8s-pod-network.a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:50.989 [INFO][4318] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" HandleID="k8s-pod-network.a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-tbrxc", "timestamp":"2025-08-13 01:07:50.989122842 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:50.989 [INFO][4318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:50.989 [INFO][4318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
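
Each Calico veth in this log (calica8de8ceef5 above, calib7c124b65d7 here) goes through the same lifecycle: Link UP, the kernel's ADDRCONF "link becomes ready", Gained carrier, and eventually Gained IPv6LL. A small helper for pulling those events for one interface out of a saved copy of this journal (the journal.txt path is hypothetical):

import re

def interface_events(journal_path, ifname):
    """Collect Link UP / carrier / IPv6LL / ADDRCONF lines for one Calico interface."""
    pattern = re.compile(re.escape(ifname) +
                         r".*(Link UP|Gained carrier|Gained IPv6LL|link becomes ready)")
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        return [line.rstrip() for line in fh if pattern.search(line)]

for event in interface_events("journal.txt", "calib7c124b65d7"):   # path is an assumption
    print(event)
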
Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:50.989 [INFO][4318] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:50.995 [INFO][4318] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" host="localhost" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:50.999 [INFO][4318] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:51.003 [INFO][4318] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:51.005 [INFO][4318] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:51.007 [INFO][4318] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:51.007 [INFO][4318] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" host="localhost" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:51.008 [INFO][4318] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:51.012 [INFO][4318] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" host="localhost" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:51.017 [INFO][4318] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" host="localhost" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:51.017 [INFO][4318] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" host="localhost" Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:51.017 [INFO][4318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:07:51.041207 env[1307]: 2025-08-13 01:07:51.017 [INFO][4318] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" HandleID="k8s-pod-network.a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:07:51.041901 env[1307]: 2025-08-13 01:07:51.020 [INFO][4303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbrxc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8b672ef7-dbd1-4d90-9a79-75019f71379f", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-tbrxc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7c124b65d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:51.041901 env[1307]: 2025-08-13 01:07:51.020 [INFO][4303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbrxc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:07:51.041901 env[1307]: 2025-08-13 01:07:51.020 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7c124b65d7 ContainerID="a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbrxc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:07:51.041901 env[1307]: 2025-08-13 01:07:51.025 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbrxc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:07:51.041901 env[1307]: 2025-08-13 01:07:51.027 
[INFO][4303] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbrxc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8b672ef7-dbd1-4d90-9a79-75019f71379f", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b", Pod:"coredns-7c65d6cfc9-tbrxc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7c124b65d7", MAC:"72:07:74:33:31:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:51.041901 env[1307]: 2025-08-13 01:07:51.037 [INFO][4303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tbrxc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:07:51.053000 audit[4334]: NETFILTER_CFG table=filter:112 family=2 entries=54 op=nft_register_chain pid=4334 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:51.053000 audit[4334]: SYSCALL arch=c000003e syscall=46 success=yes exit=25572 a0=3 a1=7ffca9468fe0 a2=0 a3=7ffca9468fcc items=0 ppid=3469 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:51.053000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:51.070744 env[1307]: time="2025-08-13T01:07:51.070659867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:51.070744 env[1307]: time="2025-08-13T01:07:51.070704006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:51.070744 env[1307]: time="2025-08-13T01:07:51.070714788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:51.070953 env[1307]: time="2025-08-13T01:07:51.070889182Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b pid=4342 runtime=io.containerd.runc.v2 Aug 13 01:07:51.091120 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:07:51.114146 env[1307]: time="2025-08-13T01:07:51.112530595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tbrxc,Uid:8b672ef7-dbd1-4d90-9a79-75019f71379f,Namespace:kube-system,Attempt:1,} returns sandbox id \"a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b\"" Aug 13 01:07:51.114762 kubelet[2136]: E0813 01:07:51.114728 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:51.117753 env[1307]: time="2025-08-13T01:07:51.117680763Z" level=info msg="CreateContainer within sandbox \"a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:07:51.187816 env[1307]: time="2025-08-13T01:07:51.187755172Z" level=info msg="CreateContainer within sandbox \"a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0a2089a98676d08a9b33a01440226ce8262a588ee3872232b5dd80433a98035\"" Aug 13 01:07:51.190743 env[1307]: time="2025-08-13T01:07:51.190675247Z" level=info msg="StartContainer for \"d0a2089a98676d08a9b33a01440226ce8262a588ee3872232b5dd80433a98035\"" Aug 13 01:07:51.280106 env[1307]: time="2025-08-13T01:07:51.280007586Z" level=info msg="StartContainer for \"d0a2089a98676d08a9b33a01440226ce8262a588ee3872232b5dd80433a98035\" returns successfully" Aug 13 01:07:51.459396 env[1307]: time="2025-08-13T01:07:51.459090794Z" level=info msg="StopPodSandbox for \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\"" Aug 13 01:07:51.482740 systemd-networkd[1079]: calica8de8ceef5: Gained IPv6LL Aug 13 01:07:51.546736 systemd-networkd[1079]: calib68604f2712: Gained IPv6LL Aug 13 01:07:51.566305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3758986971.mount: Deactivated successfully. 
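
In the WorkloadEndpoint dumps above, the ports are printed as Go hex literals: Port:0x35 is 53 for the dns and dns-tcp entries, and Port:0x23c1 is 9153 for the coredns metrics endpoint, e.g.:

# dns/dns-tcp and metrics ports from the endpoint dump, converted from hex
print(int("0x35", 16), int("0x23c1", 16))   # 53 9153
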
Aug 13 01:07:51.587360 kubelet[2136]: E0813 01:07:51.587319 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:51.587733 kubelet[2136]: E0813 01:07:51.587390 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:51.815034 kubelet[2136]: I0813 01:07:51.814085 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tbrxc" podStartSLOduration=37.814064093 podStartE2EDuration="37.814064093s" podCreationTimestamp="2025-08-13 01:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:07:51.813915411 +0000 UTC m=+42.459045370" watchObservedRunningTime="2025-08-13 01:07:51.814064093 +0000 UTC m=+42.459194052" Aug 13 01:07:51.870000 audit[4441]: NETFILTER_CFG table=filter:113 family=2 entries=14 op=nft_register_rule pid=4441 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:51.870000 audit[4441]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd095f04c0 a2=0 a3=7ffd095f04ac items=0 ppid=2285 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:51.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:51.875000 audit[4441]: NETFILTER_CFG table=nat:114 family=2 entries=44 op=nft_register_rule pid=4441 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:51.875000 audit[4441]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd095f04c0 a2=0 a3=7ffd095f04ac items=0 ppid=2285 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:51.875000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:51.584 [INFO][4423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:51.584 [INFO][4423] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" iface="eth0" netns="/var/run/netns/cni-efb14b34-44b1-bab9-678b-a4dc286f0f1d" Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:51.584 [INFO][4423] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" iface="eth0" netns="/var/run/netns/cni-efb14b34-44b1-bab9-678b-a4dc286f0f1d" Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:51.585 [INFO][4423] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" iface="eth0" netns="/var/run/netns/cni-efb14b34-44b1-bab9-678b-a4dc286f0f1d" Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:51.585 [INFO][4423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:51.585 [INFO][4423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:51.605 [INFO][4432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" HandleID="k8s-pod-network.e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:51.605 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:51.605 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:51.845 [WARNING][4432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" HandleID="k8s-pod-network.e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:51.845 [INFO][4432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" HandleID="k8s-pod-network.e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:52.038 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:52.041663 env[1307]: 2025-08-13 01:07:52.039 [INFO][4423] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:07:52.042338 env[1307]: time="2025-08-13T01:07:52.041844446Z" level=info msg="TearDown network for sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\" successfully" Aug 13 01:07:52.042338 env[1307]: time="2025-08-13T01:07:52.041882813Z" level=info msg="StopPodSandbox for \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\" returns successfully" Aug 13 01:07:52.043283 env[1307]: time="2025-08-13T01:07:52.043250947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975f76598-2gqft,Uid:43d02c55-4060-4d42-8d96-57faedfa9ddb,Namespace:calico-system,Attempt:1,}" Aug 13 01:07:52.044551 systemd[1]: run-netns-cni\x2defb14b34\x2d44b1\x2dbab9\x2d678b\x2da4dc286f0f1d.mount: Deactivated successfully. 
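
The pod_startup_latency_tracker lines report podStartSLOduration as the watch-observed running time minus podCreationTimestamp; for coredns-7c65d6cfc9-tbrxc that is 01:07:51.814064093 - 01:07:14 = 37.814064093 s, and the zeroed firstStartedPulling/lastFinishedPulling values suggest no image-pull time was subtracted. A quick check that keeps the nanosecond digits datetime alone would drop:

from datetime import datetime, timezone
from decimal import Decimal

def to_epoch_seconds(ts):
    """Parse '2025-08-13 01:07:51.814064093 +0000 UTC' keeping nanosecond precision."""
    body = ts.replace(" +0000 UTC", "")
    whole, _, frac = body.partition(".")
    base = datetime.strptime(whole, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return Decimal(int(base.timestamp())) + Decimal("0." + (frac or "0"))

created  = to_epoch_seconds("2025-08-13 01:07:14 +0000 UTC")
observed = to_epoch_seconds("2025-08-13 01:07:51.814064093 +0000 UTC")
print(observed - created)   # 37.814064093, matching podStartSLOduration
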
Aug 13 01:07:52.249745 systemd-networkd[1079]: calib7c124b65d7: Gained IPv6LL Aug 13 01:07:52.457292 env[1307]: time="2025-08-13T01:07:52.457246101Z" level=info msg="StopPodSandbox for \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\"" Aug 13 01:07:52.589245 kubelet[2136]: E0813 01:07:52.588977 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:52.589760 kubelet[2136]: E0813 01:07:52.589734 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:52.700429 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:34782.service. Aug 13 01:07:52.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.139:22-10.0.0.1:34782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:52.705742 kernel: kauditd_printk_skb: 34 callbacks suppressed Aug 13 01:07:52.705804 kernel: audit: type=1130 audit(1755047272.700:431): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.139:22-10.0.0.1:34782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:52.740000 audit[4467]: USER_ACCT pid=4467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:52.741726 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 34782 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:07:52.745000 audit[4467]: CRED_ACQ pid=4467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:52.746335 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:07:52.749662 kernel: audit: type=1101 audit(1755047272.740:432): pid=4467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:52.749719 kernel: audit: type=1103 audit(1755047272.745:433): pid=4467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:52.749739 kernel: audit: type=1006 audit(1755047272.745:434): pid=4467 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Aug 13 01:07:52.750143 systemd-logind[1290]: New session 10 of user core. Aug 13 01:07:52.750944 systemd[1]: Started session-10.scope. 
Aug 13 01:07:52.752380 kernel: audit: type=1300 audit(1755047272.745:434): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6052db60 a2=3 a3=0 items=0 ppid=1 pid=4467 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:52.745000 audit[4467]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6052db60 a2=3 a3=0 items=0 ppid=1 pid=4467 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:52.745000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:07:52.758518 kernel: audit: type=1327 audit(1755047272.745:434): proctitle=737368643A20636F7265205B707269765D Aug 13 01:07:52.758576 kernel: audit: type=1105 audit(1755047272.755:435): pid=4467 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:52.755000 audit[4467]: USER_START pid=4467 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:52.757000 audit[4470]: CRED_ACQ pid=4470 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:52.765909 kernel: audit: type=1103 audit(1755047272.757:436): pid=4470 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:53.072081 env[1307]: time="2025-08-13T01:07:53.072015518Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:52.777 [INFO][4459] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:52.777 [INFO][4459] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" iface="eth0" netns="/var/run/netns/cni-cacaa99c-7c9e-4d56-9210-c52e9f920b8e" Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:52.778 [INFO][4459] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" iface="eth0" netns="/var/run/netns/cni-cacaa99c-7c9e-4d56-9210-c52e9f920b8e" Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:52.778 [INFO][4459] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" iface="eth0" netns="/var/run/netns/cni-cacaa99c-7c9e-4d56-9210-c52e9f920b8e" Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:52.778 [INFO][4459] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:52.778 [INFO][4459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:52.798 [INFO][4472] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" HandleID="k8s-pod-network.c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:52.798 [INFO][4472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:52.798 [INFO][4472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:53.061 [WARNING][4472] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" HandleID="k8s-pod-network.c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:53.061 [INFO][4472] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" HandleID="k8s-pod-network.c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:53.068 [INFO][4472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:53.073655 env[1307]: 2025-08-13 01:07:53.070 [INFO][4459] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:07:53.076000 audit[4490]: NETFILTER_CFG table=filter:115 family=2 entries=14 op=nft_register_rule pid=4490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:53.078178 systemd[1]: run-netns-cni\x2dcacaa99c\x2d7c9e\x2d4d56\x2d9210\x2dc52e9f920b8e.mount: Deactivated successfully. 
Aug 13 01:07:53.082639 kernel: audit: type=1325 audit(1755047273.076:437): table=filter:115 family=2 entries=14 op=nft_register_rule pid=4490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:53.081708 sshd[4467]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:53.082836 env[1307]: time="2025-08-13T01:07:53.081279534Z" level=info msg="TearDown network for sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\" successfully" Aug 13 01:07:53.082836 env[1307]: time="2025-08-13T01:07:53.081347281Z" level=info msg="StopPodSandbox for \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\" returns successfully" Aug 13 01:07:53.083195 env[1307]: time="2025-08-13T01:07:53.083154332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-b5ght,Uid:16c33eda-a5e8-41ab-8360-f83220e743eb,Namespace:calico-system,Attempt:1,}" Aug 13 01:07:53.076000 audit[4490]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcb983ea60 a2=0 a3=7ffcb983ea4c items=0 ppid=2285 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:53.086747 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:34782.service: Deactivated successfully. Aug 13 01:07:53.087927 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:07:53.088455 systemd-logind[1290]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:07:53.089686 kernel: audit: type=1300 audit(1755047273.076:437): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcb983ea60 a2=0 a3=7ffcb983ea4c items=0 ppid=2285 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:53.076000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:53.084000 audit[4467]: USER_END pid=4467 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:53.084000 audit[4467]: CRED_DISP pid=4467 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:53.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.139:22-10.0.0.1:34782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:53.089552 systemd-logind[1290]: Removed session 10. 
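
The audit PROCTITLE records encode the command line as hex with NUL-separated arguments; decoding them recovers the invocations seen above, for example "sshd: core [priv]" and "iptables-restore -w 5 -W 100000 --noflush --counters":

def decode_proctitle(hex_value):
    """audit PROCTITLE values are the process argv, hex-encoded with NUL separators."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]
print(decode_proctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030"
                       "002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
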
Aug 13 01:07:53.094000 audit[4490]: NETFILTER_CFG table=nat:116 family=2 entries=56 op=nft_register_chain pid=4490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:53.094000 audit[4490]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcb983ea60 a2=0 a3=7ffcb983ea4c items=0 ppid=2285 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:53.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:53.162423 env[1307]: time="2025-08-13T01:07:53.162366672Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:53.172673 env[1307]: time="2025-08-13T01:07:53.172642027Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:53.187406 env[1307]: time="2025-08-13T01:07:53.187358899Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:53.188051 env[1307]: time="2025-08-13T01:07:53.188023635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 01:07:53.190210 env[1307]: time="2025-08-13T01:07:53.190184272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 01:07:53.190831 env[1307]: time="2025-08-13T01:07:53.190801462Z" level=info msg="CreateContainer within sandbox \"867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 01:07:53.211213 env[1307]: time="2025-08-13T01:07:53.211153299Z" level=info msg="CreateContainer within sandbox \"867f69a6d139450520b95e521621ea55c4a5ba259816e8619d8683e0f33f394a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7c944bb3b5a52306104ff7020c6cd7e578f06c31d83bed8c22c077b568aef0db\"" Aug 13 01:07:53.213936 env[1307]: time="2025-08-13T01:07:53.213872948Z" level=info msg="StartContainer for \"7c944bb3b5a52306104ff7020c6cd7e578f06c31d83bed8c22c077b568aef0db\"" Aug 13 01:07:53.355490 env[1307]: time="2025-08-13T01:07:53.355318023Z" level=info msg="StartContainer for \"7c944bb3b5a52306104ff7020c6cd7e578f06c31d83bed8c22c077b568aef0db\" returns successfully" Aug 13 01:07:53.389740 systemd-networkd[1079]: calic57eac8fe8a: Link UP Aug 13 01:07:53.393480 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:07:53.393740 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic57eac8fe8a: link becomes ready Aug 13 01:07:53.394619 systemd-networkd[1079]: calic57eac8fe8a: Gained carrier Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.209 [INFO][4494] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0 calico-kube-controllers-975f76598- calico-system 
43d02c55-4060-4d42-8d96-57faedfa9ddb 1080 0 2025-08-13 01:07:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:975f76598 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-975f76598-2gqft eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic57eac8fe8a [] [] }} ContainerID="9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" Namespace="calico-system" Pod="calico-kube-controllers-975f76598-2gqft" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975f76598--2gqft-" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.209 [INFO][4494] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" Namespace="calico-system" Pod="calico-kube-controllers-975f76598-2gqft" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.286 [INFO][4526] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" HandleID="k8s-pod-network.9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.287 [INFO][4526] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" HandleID="k8s-pod-network.9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4920), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-975f76598-2gqft", "timestamp":"2025-08-13 01:07:53.286563527 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.287 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.288 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
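The containerd events just above name the whisker-backend image three different ways: a bare ID digest (sha256:6ba7e39e…), a tag reference (ghcr.io/flatcar/calico/whisker-backend:v3.30.2), and a repo digest (…@sha256:fbf7f21f…). Below is a rough sketch for splitting such references into repository, tag, and digest; it is an illustration only, not containerd's own reference parser:

```python
def split_image_reference(ref: str) -> dict:
    """Very rough split of an OCI-style image reference (illustrative only)."""
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    tag = None
    # A ':' after the last '/' separates the tag from the repository.
    if ":" in ref.rsplit("/", 1)[-1]:
        ref, tag = ref.rsplit(":", 1)
    return {"repository": ref, "tag": tag, "digest": digest}

# References copied from the ImageUpdate/ImageCreate events above.
print(split_image_reference(
    "ghcr.io/flatcar/calico/whisker-backend@sha256:"
    "fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5"))
print(split_image_reference("ghcr.io/flatcar/calico/whisker-backend:v3.30.2"))
```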
Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.288 [INFO][4526] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.305 [INFO][4526] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" host="localhost" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.317 [INFO][4526] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.327 [INFO][4526] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.331 [INFO][4526] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.335 [INFO][4526] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.337 [INFO][4526] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" host="localhost" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.341 [INFO][4526] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.352 [INFO][4526] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" host="localhost" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.380 [INFO][4526] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" host="localhost" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.380 [INFO][4526] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" host="localhost" Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.380 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
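The IPAM trace above follows Calico's block-affinity path as logged: the host-wide IPAM lock is taken, the existing affinity for 192.168.88.128/26 is confirmed, the block is loaded, and 192.168.88.135 is claimed from it. A quick sketch confirming that the claimed address does sit inside that /26 (64-address) block:

```python
import ipaddress

# Block and claimed address taken from the ipam.go lines above.
block = ipaddress.ip_network("192.168.88.128/26")
claimed = ipaddress.ip_address("192.168.88.135")

print(claimed in block)      # True: the claim stays inside the affine block
print(block.num_addresses)   # 64 addresses per /26 block
print(block[0], block[-1])   # 192.168.88.128 192.168.88.191
```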
Aug 13 01:07:53.416382 env[1307]: 2025-08-13 01:07:53.380 [INFO][4526] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" HandleID="k8s-pod-network.9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:07:53.417755 env[1307]: 2025-08-13 01:07:53.382 [INFO][4494] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" Namespace="calico-system" Pod="calico-kube-controllers-975f76598-2gqft" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0", GenerateName:"calico-kube-controllers-975f76598-", Namespace:"calico-system", SelfLink:"", UID:"43d02c55-4060-4d42-8d96-57faedfa9ddb", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"975f76598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-975f76598-2gqft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic57eac8fe8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:53.417755 env[1307]: 2025-08-13 01:07:53.383 [INFO][4494] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" Namespace="calico-system" Pod="calico-kube-controllers-975f76598-2gqft" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:07:53.417755 env[1307]: 2025-08-13 01:07:53.383 [INFO][4494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic57eac8fe8a ContainerID="9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" Namespace="calico-system" Pod="calico-kube-controllers-975f76598-2gqft" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:07:53.417755 env[1307]: 2025-08-13 01:07:53.396 [INFO][4494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" Namespace="calico-system" Pod="calico-kube-controllers-975f76598-2gqft" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:07:53.417755 env[1307]: 2025-08-13 01:07:53.396 [INFO][4494] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" Namespace="calico-system" Pod="calico-kube-controllers-975f76598-2gqft" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0", GenerateName:"calico-kube-controllers-975f76598-", Namespace:"calico-system", SelfLink:"", UID:"43d02c55-4060-4d42-8d96-57faedfa9ddb", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"975f76598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a", Pod:"calico-kube-controllers-975f76598-2gqft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic57eac8fe8a", MAC:"82:2b:33:7d:1e:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:53.417755 env[1307]: 2025-08-13 01:07:53.410 [INFO][4494] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a" Namespace="calico-system" Pod="calico-kube-controllers-975f76598-2gqft" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:07:53.435017 env[1307]: time="2025-08-13T01:07:53.434864279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:53.435203 env[1307]: time="2025-08-13T01:07:53.435025094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:53.435203 env[1307]: time="2025-08-13T01:07:53.435056187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:53.438002 env[1307]: time="2025-08-13T01:07:53.437911171Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a pid=4601 runtime=io.containerd.runc.v2 Aug 13 01:07:53.449000 audit[4617]: NETFILTER_CFG table=filter:117 family=2 entries=58 op=nft_register_chain pid=4617 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:53.449000 audit[4617]: SYSCALL arch=c000003e syscall=46 success=yes exit=27164 a0=3 a1=7fff0773b210 a2=0 a3=7fff0773b1fc items=0 ppid=3469 pid=4617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:53.449000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:53.483510 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:07:53.517861 env[1307]: time="2025-08-13T01:07:53.517789029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-975f76598-2gqft,Uid:43d02c55-4060-4d42-8d96-57faedfa9ddb,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a\"" Aug 13 01:07:53.597943 kubelet[2136]: E0813 01:07:53.597886 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:54.143740 systemd-networkd[1079]: calib295fae38fb: Link UP Aug 13 01:07:54.158709 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib295fae38fb: link becomes ready Aug 13 01:07:54.157530 systemd-networkd[1079]: calib295fae38fb: Gained carrier Aug 13 01:07:54.452198 kubelet[2136]: I0813 01:07:54.452026 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6444db5f5d-tjp8w" podStartSLOduration=2.4507467419999998 podStartE2EDuration="8.451998979s" podCreationTimestamp="2025-08-13 01:07:46 +0000 UTC" firstStartedPulling="2025-08-13 01:07:47.188106092 +0000 UTC m=+37.833236051" lastFinishedPulling="2025-08-13 01:07:53.189358329 +0000 UTC m=+43.834488288" observedRunningTime="2025-08-13 01:07:54.427937623 +0000 UTC m=+45.073067582" watchObservedRunningTime="2025-08-13 01:07:54.451998979 +0000 UTC m=+45.097128938" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.239 [INFO][4507] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--b5ght-eth0 goldmane-58fd7646b9- calico-system 16c33eda-a5e8-41ab-8360-f83220e743eb 1092 0 2025-08-13 01:07:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-b5ght eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib295fae38fb [] [] }} ContainerID="2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" Namespace="calico-system" Pod="goldmane-58fd7646b9-b5ght" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b5ght-" Aug 
13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.240 [INFO][4507] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" Namespace="calico-system" Pod="goldmane-58fd7646b9-b5ght" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.308 [INFO][4555] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" HandleID="k8s-pod-network.2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.318 [INFO][4555] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" HandleID="k8s-pod-network.2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aff00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-b5ght", "timestamp":"2025-08-13 01:07:53.308397054 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.318 [INFO][4555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.380 [INFO][4555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.380 [INFO][4555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.403 [INFO][4555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" host="localhost" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.418 [INFO][4555] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.427 [INFO][4555] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.433 [INFO][4555] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.438 [INFO][4555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.438 [INFO][4555] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" host="localhost" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.454 [INFO][4555] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6 Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:53.763 [INFO][4555] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" host="localhost" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:54.129 [INFO][4555] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" host="localhost" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:54.129 [INFO][4555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" host="localhost" Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:54.129 [INFO][4555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:54.456788 env[1307]: 2025-08-13 01:07:54.129 [INFO][4555] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" HandleID="k8s-pod-network.2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:07:54.458113 env[1307]: 2025-08-13 01:07:54.132 [INFO][4507] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" Namespace="calico-system" Pod="goldmane-58fd7646b9-b5ght" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--b5ght-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"16c33eda-a5e8-41ab-8360-f83220e743eb", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-b5ght", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib295fae38fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:54.458113 env[1307]: 2025-08-13 01:07:54.132 [INFO][4507] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" Namespace="calico-system" Pod="goldmane-58fd7646b9-b5ght" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:07:54.458113 env[1307]: 2025-08-13 01:07:54.132 [INFO][4507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib295fae38fb ContainerID="2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" Namespace="calico-system" Pod="goldmane-58fd7646b9-b5ght" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:07:54.458113 env[1307]: 2025-08-13 01:07:54.157 [INFO][4507] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" Namespace="calico-system" Pod="goldmane-58fd7646b9-b5ght" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:07:54.458113 env[1307]: 2025-08-13 01:07:54.163 [INFO][4507] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" Namespace="calico-system" Pod="goldmane-58fd7646b9-b5ght" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--b5ght-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"16c33eda-a5e8-41ab-8360-f83220e743eb", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6", Pod:"goldmane-58fd7646b9-b5ght", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib295fae38fb", MAC:"f2:01:6e:07:fc:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:54.458113 env[1307]: 2025-08-13 01:07:54.450 [INFO][4507] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6" Namespace="calico-system" Pod="goldmane-58fd7646b9-b5ght" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:07:54.490000 audit[4644]: NETFILTER_CFG table=filter:118 family=2 entries=48 op=nft_register_chain pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:07:54.490000 audit[4644]: SYSCALL arch=c000003e syscall=46 success=yes exit=26388 a0=3 a1=7ffe26c9e040 a2=0 a3=7ffe26c9e02c items=0 ppid=3469 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:54.490000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:07:54.507000 audit[4646]: NETFILTER_CFG table=filter:119 family=2 entries=13 op=nft_register_rule pid=4646 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:54.507000 audit[4646]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fffc1502000 a2=0 a3=7fffc1501fec items=0 ppid=2285 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:54.507000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:54.511000 audit[4646]: NETFILTER_CFG table=nat:120 family=2 entries=27 op=nft_register_chain pid=4646 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:54.511000 audit[4646]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fffc1502000 a2=0 a3=7fffc1501fec items=0 ppid=2285 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:54.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:54.608743 kubelet[2136]: E0813 01:07:54.608626 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:07:54.793015 env[1307]: time="2025-08-13T01:07:54.792836799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:54.793015 env[1307]: time="2025-08-13T01:07:54.792882921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:54.793015 env[1307]: time="2025-08-13T01:07:54.792893863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:54.793224 env[1307]: time="2025-08-13T01:07:54.793081684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6 pid=4654 runtime=io.containerd.runc.v2 Aug 13 01:07:54.811392 systemd[1]: run-containerd-runc-k8s.io-2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6-runc.pdAFIe.mount: Deactivated successfully. 
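The two endpoint records above carry generated MAC addresses: 82:2b:33:7d:1e:f6 (interface calic57eac8fe8a) and f2:01:6e:07:fc:98 (interface calib295fae38fb). Both have the locally-administered U/L bit set in the first octet, which is what you would expect of addresses made up for veth interfaces rather than vendor-assigned ones; a quick check:

```python
# Check the U/L (locally administered) bit on the endpoint MACs recorded above.
def is_locally_administered(mac: str) -> bool:
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)   # bit 1 of the first octet is the U/L bit

for mac in ("82:2b:33:7d:1e:f6", "f2:01:6e:07:fc:98"):
    print(mac, is_locally_administered(mac))   # both print True
```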
Aug 13 01:07:54.819691 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:07:54.840820 env[1307]: time="2025-08-13T01:07:54.840772412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-b5ght,Uid:16c33eda-a5e8-41ab-8360-f83220e743eb,Namespace:calico-system,Attempt:1,} returns sandbox id \"2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6\"" Aug 13 01:07:55.387680 systemd-networkd[1079]: calic57eac8fe8a: Gained IPv6LL Aug 13 01:07:56.025844 systemd-networkd[1079]: calib295fae38fb: Gained IPv6LL Aug 13 01:07:58.038148 env[1307]: time="2025-08-13T01:07:58.038083968Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:58.040100 env[1307]: time="2025-08-13T01:07:58.040059529Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:58.041728 env[1307]: time="2025-08-13T01:07:58.041692404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:58.043251 env[1307]: time="2025-08-13T01:07:58.043198243Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:58.043879 env[1307]: time="2025-08-13T01:07:58.043838138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 01:07:58.045029 env[1307]: time="2025-08-13T01:07:58.044977528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 01:07:58.046057 env[1307]: time="2025-08-13T01:07:58.045978452Z" level=info msg="CreateContainer within sandbox \"5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 01:07:58.060815 env[1307]: time="2025-08-13T01:07:58.060760896Z" level=info msg="CreateContainer within sandbox \"5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5db58b1ee151c498548d7b08d63c7624bf26c3226c9e34477e126f0f5b28b0f9\"" Aug 13 01:07:58.061361 env[1307]: time="2025-08-13T01:07:58.061327280Z" level=info msg="StartContainer for \"5db58b1ee151c498548d7b08d63c7624bf26c3226c9e34477e126f0f5b28b0f9\"" Aug 13 01:07:58.089618 kernel: kauditd_printk_skb: 19 callbacks suppressed Aug 13 01:07:58.089791 kernel: audit: type=1130 audit(1755047278.083:446): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.139:22-10.0.0.1:34784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:58.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.139:22-10.0.0.1:34784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:07:58.083781 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:34784.service. Aug 13 01:07:58.369000 audit[4710]: USER_ACCT pid=4710 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.372699 sshd[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:07:58.375628 kernel: audit: type=1101 audit(1755047278.369:447): pid=4710 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.375672 sshd[4710]: Accepted publickey for core from 10.0.0.1 port 34784 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:07:58.371000 audit[4710]: CRED_ACQ pid=4710 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.378119 systemd-logind[1290]: New session 11 of user core. Aug 13 01:07:58.378398 systemd[1]: Started session-11.scope. Aug 13 01:07:58.382712 kernel: audit: type=1103 audit(1755047278.371:448): pid=4710 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.382792 kernel: audit: type=1006 audit(1755047278.371:449): pid=4710 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Aug 13 01:07:58.382829 kernel: audit: type=1300 audit(1755047278.371:449): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1b4c4450 a2=3 a3=0 items=0 ppid=1 pid=4710 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:58.371000 audit[4710]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1b4c4450 a2=3 a3=0 items=0 ppid=1 pid=4710 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:58.387111 kernel: audit: type=1327 audit(1755047278.371:449): proctitle=737368643A20636F7265205B707269765D Aug 13 01:07:58.371000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:07:58.388518 kernel: audit: type=1105 audit(1755047278.386:450): pid=4710 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.386000 audit[4710]: USER_START pid=4710 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.396497 kernel: audit: type=1103 audit(1755047278.388:451): pid=4735 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.388000 audit[4735]: CRED_ACQ pid=4735 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.421533 env[1307]: time="2025-08-13T01:07:58.421462342Z" level=info msg="StartContainer for \"5db58b1ee151c498548d7b08d63c7624bf26c3226c9e34477e126f0f5b28b0f9\" returns successfully" Aug 13 01:07:58.533573 sshd[4710]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:58.534000 audit[4710]: USER_END pid=4710 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.540605 kernel: audit: type=1106 audit(1755047278.534:452): pid=4710 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.536638 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:34786.service. Aug 13 01:07:58.540548 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:34784.service: Deactivated successfully. Aug 13 01:07:58.534000 audit[4710]: CRED_DISP pid=4710 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.541374 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:07:58.542081 systemd-logind[1290]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:07:58.542992 systemd-logind[1290]: Removed session 11. Aug 13 01:07:58.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.139:22-10.0.0.1:34786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:58.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.139:22-10.0.0.1:34784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:07:58.545625 kernel: audit: type=1104 audit(1755047278.534:453): pid=4710 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.546008 env[1307]: time="2025-08-13T01:07:58.545972225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:58.547866 env[1307]: time="2025-08-13T01:07:58.547845354Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:58.549761 env[1307]: time="2025-08-13T01:07:58.549719306Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:58.551597 env[1307]: time="2025-08-13T01:07:58.551521972Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:07:58.551907 env[1307]: time="2025-08-13T01:07:58.551878144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 01:07:58.553873 env[1307]: time="2025-08-13T01:07:58.553497434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:07:58.554725 env[1307]: time="2025-08-13T01:07:58.554251650Z" level=info msg="CreateContainer within sandbox \"5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 01:07:58.569493 env[1307]: time="2025-08-13T01:07:58.569447578Z" level=info msg="CreateContainer within sandbox \"5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"64724108bc1e78de57abf501ea7421b2a75f7917934015ded8e0d0c77c3d594d\"" Aug 13 01:07:58.571550 env[1307]: time="2025-08-13T01:07:58.570552146Z" level=info msg="StartContainer for \"64724108bc1e78de57abf501ea7421b2a75f7917934015ded8e0d0c77c3d594d\"" Aug 13 01:07:58.579000 audit[4745]: USER_ACCT pid=4745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.580059 sshd[4745]: Accepted publickey for core from 10.0.0.1 port 34786 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:07:58.580000 audit[4745]: CRED_ACQ pid=4745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.580000 audit[4745]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd10cd0e40 a2=3 a3=0 items=0 ppid=1 pid=4745 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 01:07:58.580000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:07:58.581452 sshd[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:07:58.586160 systemd[1]: Started session-12.scope. Aug 13 01:07:58.591964 systemd-logind[1290]: New session 12 of user core. Aug 13 01:07:58.596000 audit[4745]: USER_START pid=4745 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.599000 audit[4766]: CRED_ACQ pid=4766 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.629230 kubelet[2136]: I0813 01:07:58.628826 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66678997c4-cjt94" podStartSLOduration=26.38650004 podStartE2EDuration="35.628804607s" podCreationTimestamp="2025-08-13 01:07:23 +0000 UTC" firstStartedPulling="2025-08-13 01:07:48.802513177 +0000 UTC m=+39.447643136" lastFinishedPulling="2025-08-13 01:07:58.044817754 +0000 UTC m=+48.689947703" observedRunningTime="2025-08-13 01:07:58.628237682 +0000 UTC m=+49.273367642" watchObservedRunningTime="2025-08-13 01:07:58.628804607 +0000 UTC m=+49.273934566" Aug 13 01:07:58.642787 env[1307]: time="2025-08-13T01:07:58.642731795Z" level=info msg="StartContainer for \"64724108bc1e78de57abf501ea7421b2a75f7917934015ded8e0d0c77c3d594d\" returns successfully" Aug 13 01:07:58.645000 audit[4789]: NETFILTER_CFG table=filter:121 family=2 entries=12 op=nft_register_rule pid=4789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:58.645000 audit[4789]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff93386a10 a2=0 a3=7fff933869fc items=0 ppid=2285 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:58.645000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:58.652000 audit[4789]: NETFILTER_CFG table=nat:122 family=2 entries=22 op=nft_register_rule pid=4789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:58.652000 audit[4789]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff93386a10 a2=0 a3=7fff933869fc items=0 ppid=2285 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:58.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:58.781359 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:34802.service. Aug 13 01:07:58.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.139:22-10.0.0.1:34802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:07:58.783367 sshd[4745]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:58.783000 audit[4745]: USER_END pid=4745 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.783000 audit[4745]: CRED_DISP pid=4745 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.139:22-10.0.0.1:34786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:58.786259 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:34786.service: Deactivated successfully. Aug 13 01:07:58.787149 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:07:58.788232 systemd-logind[1290]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:07:58.789220 systemd-logind[1290]: Removed session 12. Aug 13 01:07:58.828000 audit[4800]: USER_ACCT pid=4800 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.830611 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 34802 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:07:58.830000 audit[4800]: CRED_ACQ pid=4800 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.830000 audit[4800]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2d6c7660 a2=3 a3=0 items=0 ppid=1 pid=4800 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:58.830000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:07:58.831918 sshd[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:07:58.836834 systemd-logind[1290]: New session 13 of user core. Aug 13 01:07:58.837857 systemd[1]: Started session-13.scope. 
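The kubelet pod_startup_latency_tracker entries in this section (the whisker pod earlier and calico-apiserver-66678997c4-cjt94 just above) report podStartE2EDuration as the gap from podCreationTimestamp to watchObservedRunningTime, with podStartSLOduration apparently excluding the image-pull window between firstStartedPulling and lastFinishedPulling; for the whisker pod the logged numbers line up exactly under that reading. A sketch reproducing the arithmetic from the whisker entry's timestamps (the SLO interpretation is my reading of the values, not something the log states):

```python
from datetime import datetime, timezone
from decimal import Decimal

def parse(ts: str) -> Decimal:
    """Turn a kubelet timestamp such as '2025-08-13 01:07:54.451998979 +0000 UTC'
    into Decimal seconds since the epoch, keeping the nanosecond fraction."""
    stamp = ts.replace(" +0000 UTC", "")
    whole, _, frac = stamp.partition(".")
    base = datetime.strptime(whole, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return Decimal(int(base.timestamp())) + Decimal("0." + (frac or "0"))

# Timestamps copied from the whisker-6444db5f5d-tjp8w entry earlier in the log.
created   = parse("2025-08-13 01:07:46 +0000 UTC")                # podCreationTimestamp
observed  = parse("2025-08-13 01:07:54.451998979 +0000 UTC")      # watchObservedRunningTime
pull_from = parse("2025-08-13 01:07:47.188106092 +0000 UTC")      # firstStartedPulling
pull_to   = parse("2025-08-13 01:07:53.189358329 +0000 UTC")      # lastFinishedPulling

print(observed - created)                           # 8.451998979  (podStartE2EDuration)
print(observed - created - (pull_to - pull_from))   # 2.450746742  (podStartSLOduration)
```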
Aug 13 01:07:58.842000 audit[4800]: USER_START pid=4800 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.844000 audit[4805]: CRED_ACQ pid=4805 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.973796 sshd[4800]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:58.974000 audit[4800]: USER_END pid=4800 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.974000 audit[4800]: CRED_DISP pid=4800 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:07:58.977228 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:34802.service: Deactivated successfully. Aug 13 01:07:58.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.139:22-10.0.0.1:34802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:07:58.978403 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:07:58.978946 systemd-logind[1290]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:07:58.979959 systemd-logind[1290]: Removed session 13. 
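The audit records around these SSH sessions (USER_START, USER_END, CRED_ACQ, CRED_DISP, SERVICE_START/STOP) all use a flat key=value layout, with the PAM detail nested inside a quoted msg='…' field. A rough parser sketch for lines of that shape, run against a shortened copy of the USER_END record above; this is a simplification (hex-encoded fields such as proctitle are not handled) and not auditd's own parser:

```python
import re

# Rough sketch, not auditd's parser: pull key=value pairs out of an audit
# record, including the pairs nested inside the quoted msg='...' field.
PAIR = re.compile(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)")

def parse_audit(line: str) -> dict:
    fields = dict(PAIR.findall(line))
    msg = fields.get("msg", "")
    if msg.startswith("'") and msg.endswith("'"):
        fields.update(PAIR.findall(msg[1:-1]))
    return {key: val.strip("'\"") for key, val in fields.items()}

# Shortened copy of the USER_END record for session 13 above.
record = ("audit[4800]: USER_END pid=4800 uid=0 auid=500 ses=13 "
          "msg='op=PAM:session_close acct=\"core\" exe=\"/usr/sbin/sshd\" "
          "hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'")

parsed = parse_audit(record)
print(parsed["ses"], parsed["acct"], parsed["res"])   # 13 core success
```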
Aug 13 01:07:59.628379 kubelet[2136]: I0813 01:07:59.627978 2136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:07:59.940000 audit[4819]: NETFILTER_CFG table=filter:123 family=2 entries=12 op=nft_register_rule pid=4819 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:59.940000 audit[4819]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffcd4c6b2b0 a2=0 a3=7ffcd4c6b29c items=0 ppid=2285 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:59.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:07:59.945000 audit[4819]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=4819 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:07:59.945000 audit[4819]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcd4c6b2b0 a2=0 a3=7ffcd4c6b29c items=0 ppid=2285 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:07:59.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:00.630281 kubelet[2136]: I0813 01:08:00.630249 2136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:08:02.781629 env[1307]: time="2025-08-13T01:08:02.781556309Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:03.077376 env[1307]: time="2025-08-13T01:08:03.077217592Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:03.175836 env[1307]: time="2025-08-13T01:08:03.175772988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:03.202444 env[1307]: time="2025-08-13T01:08:03.202369984Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:03.202830 env[1307]: time="2025-08-13T01:08:03.202788399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 01:08:03.204049 env[1307]: time="2025-08-13T01:08:03.203975559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:08:03.205391 env[1307]: time="2025-08-13T01:08:03.205334829Z" level=info msg="CreateContainer within sandbox \"01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 01:08:03.975646 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:39116.service. 
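The env[1307] entries above follow containerd's logfmt-style layout: time="…" level=info msg="…", with double quotes inside msg escaped with backslashes. A rough sketch (not containerd tooling) that splits one of the PullImage lines above into its fields:

```python
import re

# Rough logfmt-style split of a containerd log line (illustrative only).
FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def unquote(value: str) -> str:
    # Drop one pair of surrounding quotes; escaped quotes inside stay as-is.
    return value[1:-1] if value.startswith('"') and value.endswith('"') else value

# Line copied (with its escaped quotes) from the calico/csi PullImage event above.
line = ('time="2025-08-13T01:08:03.202788399Z" level=info '
        'msg="PullImage \\"ghcr.io/flatcar/calico/csi:v3.30.2\\" returns image '
        'reference \\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5'
        'bd36eed4e1927c6d\\""')

fields = {key: unquote(val) for key, val in FIELD.findall(line)}
print(fields["level"])   # info
print(fields["msg"])     # PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns ...
```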
Aug 13 01:08:03.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.139:22-10.0.0.1:39116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:04.122626 kernel: kauditd_printk_skb: 35 callbacks suppressed Aug 13 01:08:04.122721 kernel: audit: type=1130 audit(1755047283.974:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.139:22-10.0.0.1:39116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:04.210000 audit[4821]: USER_ACCT pid=4821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.211905 sshd[4821]: Accepted publickey for core from 10.0.0.1 port 39116 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:04.229000 audit[4821]: CRED_ACQ pid=4821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.232053 sshd[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:04.235332 kernel: audit: type=1101 audit(1755047284.210:478): pid=4821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.235461 kernel: audit: type=1103 audit(1755047284.229:479): pid=4821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.235482 kernel: audit: type=1006 audit(1755047284.229:480): pid=4821 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Aug 13 01:08:04.235750 systemd-logind[1290]: New session 14 of user core. Aug 13 01:08:04.236534 systemd[1]: Started session-14.scope. 
Aug 13 01:08:04.229000 audit[4821]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff445ba160 a2=3 a3=0 items=0 ppid=1 pid=4821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:04.241326 kernel: audit: type=1300 audit(1755047284.229:480): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff445ba160 a2=3 a3=0 items=0 ppid=1 pid=4821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:04.241383 kernel: audit: type=1327 audit(1755047284.229:480): proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:04.229000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:04.239000 audit[4821]: USER_START pid=4821 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.246538 kernel: audit: type=1105 audit(1755047284.239:481): pid=4821 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.246611 kernel: audit: type=1103 audit(1755047284.241:482): pid=4824 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.241000 audit[4824]: CRED_ACQ pid=4824 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.767169 sshd[4821]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:04.767000 audit[4821]: USER_END pid=4821 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.773627 kernel: audit: type=1106 audit(1755047284.767:483): pid=4821 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.767000 audit[4821]: CRED_DISP pid=4821 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.777607 kernel: audit: type=1104 audit(1755047284.767:484): pid=4821 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:04.779526 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:39116.service: Deactivated successfully. 
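Sessions 10 through 14 in this stretch each follow the same logind pattern: "New session N of user core" followed later by "Removed session N". A small illustrative sketch that pairs those open/close timestamps into per-session durations; it works on hand-copied (timestamp, event, session) tuples from the messages above, with the year added because syslog-style stamps omit it:

```python
from datetime import datetime

# Simplified tuples copied from the logind messages above for two of the sessions;
# parsing of the raw journal lines is left out of this sketch.
events = [
    ("Aug 13 01:07:58.378119", "new",     11),
    ("Aug 13 01:07:58.542992", "removed", 11),
    ("Aug 13 01:08:04.235750", "new",     14),
    ("Aug 13 01:08:04.783574", "removed", 14),
]

def ts(stamp: str) -> datetime:
    # The year is assumed (2025) since these log stamps do not carry one.
    return datetime.strptime("2025 " + stamp, "%Y %b %d %H:%M:%S.%f")

opened = {}
for stamp, kind, ses in events:
    if kind == "new":
        opened[ses] = ts(stamp)
    else:
        duration = (ts(stamp) - opened.pop(ses)).total_seconds()
        print(f"session {ses}: {duration:.3f}s")
```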
Aug 13 01:08:04.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.139:22-10.0.0.1:39116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:04.780440 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:08:04.782311 systemd-logind[1290]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:08:04.783574 systemd-logind[1290]: Removed session 14. Aug 13 01:08:04.910056 env[1307]: time="2025-08-13T01:08:04.910005057Z" level=info msg="CreateContainer within sandbox \"01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"39347ccbe7727378b187087489a70037595a03d3caa301b581b8eb074f89b1c6\"" Aug 13 01:08:04.911458 env[1307]: time="2025-08-13T01:08:04.911385904Z" level=info msg="StartContainer for \"39347ccbe7727378b187087489a70037595a03d3caa301b581b8eb074f89b1c6\"" Aug 13 01:08:04.972501 env[1307]: time="2025-08-13T01:08:04.972449778Z" level=info msg="StartContainer for \"39347ccbe7727378b187087489a70037595a03d3caa301b581b8eb074f89b1c6\" returns successfully" Aug 13 01:08:07.336719 env[1307]: time="2025-08-13T01:08:07.336666895Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:07.338940 env[1307]: time="2025-08-13T01:08:07.338893164Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:07.340530 env[1307]: time="2025-08-13T01:08:07.340507739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:07.342010 env[1307]: time="2025-08-13T01:08:07.341981877Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:07.342421 env[1307]: time="2025-08-13T01:08:07.342392345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 01:08:07.343439 env[1307]: time="2025-08-13T01:08:07.343419669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 01:08:07.354411 env[1307]: time="2025-08-13T01:08:07.354357745Z" level=info msg="CreateContainer within sandbox \"9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 01:08:07.367229 env[1307]: time="2025-08-13T01:08:07.367177071Z" level=info msg="CreateContainer within sandbox \"9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1dbd9de285108d0a0a32cf4df1b7d33e4e4da24dbd3eaed944243c445b1a5636\"" Aug 13 01:08:07.367727 env[1307]: time="2025-08-13T01:08:07.367699414Z" level=info msg="StartContainer for \"1dbd9de285108d0a0a32cf4df1b7d33e4e4da24dbd3eaed944243c445b1a5636\"" Aug 13 01:08:07.420686 env[1307]: 
time="2025-08-13T01:08:07.420627371Z" level=info msg="StartContainer for \"1dbd9de285108d0a0a32cf4df1b7d33e4e4da24dbd3eaed944243c445b1a5636\" returns successfully" Aug 13 01:08:07.674094 kubelet[2136]: I0813 01:08:07.673412 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66678997c4-jk2r2" podStartSLOduration=35.017137201 podStartE2EDuration="44.673392217s" podCreationTimestamp="2025-08-13 01:07:23 +0000 UTC" firstStartedPulling="2025-08-13 01:07:48.896664243 +0000 UTC m=+39.541794202" lastFinishedPulling="2025-08-13 01:07:58.552919259 +0000 UTC m=+49.198049218" observedRunningTime="2025-08-13 01:07:59.658333618 +0000 UTC m=+50.303463577" watchObservedRunningTime="2025-08-13 01:08:07.673392217 +0000 UTC m=+58.318522176" Aug 13 01:08:07.674094 kubelet[2136]: I0813 01:08:07.673924 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-975f76598-2gqft" podStartSLOduration=27.849961681 podStartE2EDuration="41.673907486s" podCreationTimestamp="2025-08-13 01:07:26 +0000 UTC" firstStartedPulling="2025-08-13 01:07:53.51928508 +0000 UTC m=+44.164415039" lastFinishedPulling="2025-08-13 01:08:07.343230885 +0000 UTC m=+57.988360844" observedRunningTime="2025-08-13 01:08:07.673160144 +0000 UTC m=+58.318290103" watchObservedRunningTime="2025-08-13 01:08:07.673907486 +0000 UTC m=+58.319037445" Aug 13 01:08:09.441225 env[1307]: time="2025-08-13T01:08:09.441174706Z" level=info msg="StopPodSandbox for \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\"" Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.480 [WARNING][4961] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7b5f5aa8-77fb-4a21-9305-7c26692f1342", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b", Pod:"coredns-7c65d6cfc9-xl6pl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica8de8ceef5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.480 [INFO][4961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.481 [INFO][4961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" iface="eth0" netns="" Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.481 [INFO][4961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.481 [INFO][4961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.507 [INFO][4972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" HandleID="k8s-pod-network.1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.508 [INFO][4972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.508 [INFO][4972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.584 [WARNING][4972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" HandleID="k8s-pod-network.1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.584 [INFO][4972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" HandleID="k8s-pod-network.1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.586 [INFO][4972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:09.590222 env[1307]: 2025-08-13 01:08:09.588 [INFO][4961] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:08:09.590884 env[1307]: time="2025-08-13T01:08:09.590253259Z" level=info msg="TearDown network for sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\" successfully" Aug 13 01:08:09.590884 env[1307]: time="2025-08-13T01:08:09.590294805Z" level=info msg="StopPodSandbox for \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\" returns successfully" Aug 13 01:08:09.593601 env[1307]: time="2025-08-13T01:08:09.593522651Z" level=info msg="RemovePodSandbox for \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\"" Aug 13 01:08:09.593671 env[1307]: time="2025-08-13T01:08:09.593606766Z" level=info msg="Forcibly stopping sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\"" Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.628 [WARNING][4989] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7b5f5aa8-77fb-4a21-9305-7c26692f1342", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"538f04811e18680d99733c7f4512ded7dfd2eaf7c5eb0932677b34047b91ad6b", Pod:"coredns-7c65d6cfc9-xl6pl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica8de8ceef5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.628 [INFO][4989] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.628 [INFO][4989] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" iface="eth0" netns="" Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.628 [INFO][4989] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.628 [INFO][4989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.646 [INFO][4997] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" HandleID="k8s-pod-network.1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.646 [INFO][4997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.646 [INFO][4997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.652 [WARNING][4997] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" HandleID="k8s-pod-network.1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.652 [INFO][4997] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" HandleID="k8s-pod-network.1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Workload="localhost-k8s-coredns--7c65d6cfc9--xl6pl-eth0" Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.653 [INFO][4997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:09.656644 env[1307]: 2025-08-13 01:08:09.655 [INFO][4989] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911" Aug 13 01:08:09.657177 env[1307]: time="2025-08-13T01:08:09.656682704Z" level=info msg="TearDown network for sandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\" successfully" Aug 13 01:08:09.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.139:22-10.0.0.1:39118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:09.769699 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:39118.service. Aug 13 01:08:09.771555 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 01:08:09.771643 kernel: audit: type=1130 audit(1755047289.769:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.139:22-10.0.0.1:39118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:08:09.809000 audit[5005]: USER_ACCT pid=5005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:09.810047 sshd[5005]: Accepted publickey for core from 10.0.0.1 port 39118 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:09.812123 sshd[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:09.811000 audit[5005]: CRED_ACQ pid=5005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:09.815842 systemd-logind[1290]: New session 15 of user core. Aug 13 01:08:09.816598 systemd[1]: Started session-15.scope. Aug 13 01:08:09.818395 kernel: audit: type=1101 audit(1755047289.809:487): pid=5005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:09.818463 kernel: audit: type=1103 audit(1755047289.811:488): pid=5005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:09.818487 kernel: audit: type=1006 audit(1755047289.811:489): pid=5005 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Aug 13 01:08:09.811000 audit[5005]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc93621030 a2=3 a3=0 items=0 ppid=1 pid=5005 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:09.825228 kernel: audit: type=1300 audit(1755047289.811:489): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc93621030 a2=3 a3=0 items=0 ppid=1 pid=5005 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:09.811000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:09.826578 kernel: audit: type=1327 audit(1755047289.811:489): proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:09.826652 kernel: audit: type=1105 audit(1755047289.820:490): pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:09.820000 audit[5005]: USER_START pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:09.821000 audit[5008]: CRED_ACQ pid=5008 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:09.833884 kernel: audit: type=1103 audit(1755047289.821:491): pid=5008 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:09.899534 env[1307]: time="2025-08-13T01:08:09.899463644Z" level=info msg="RemovePodSandbox \"1fcb813042b6fb9ee824009bda2236d26579e833576670924d7d7a6428b49911\" returns successfully" Aug 13 01:08:09.900198 env[1307]: time="2025-08-13T01:08:09.900172843Z" level=info msg="StopPodSandbox for \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\"" Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:09.953 [WARNING][5026] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0", GenerateName:"calico-apiserver-66678997c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d6cb667-b4a4-4c92-a22d-5b802942ec42", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66678997c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd", Pod:"calico-apiserver-66678997c4-cjt94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid5d6c2351a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:09.953 [INFO][5026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:09.953 [INFO][5026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" iface="eth0" netns="" Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:09.953 [INFO][5026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:09.953 [INFO][5026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:09.992 [INFO][5034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" HandleID="k8s-pod-network.dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:09.992 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:09.992 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:09.998 [WARNING][5034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" HandleID="k8s-pod-network.dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:09.998 [INFO][5034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" HandleID="k8s-pod-network.dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:09.999 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:10.007457 env[1307]: 2025-08-13 01:08:10.002 [INFO][5026] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:08:10.008076 env[1307]: time="2025-08-13T01:08:10.008038255Z" level=info msg="TearDown network for sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\" successfully" Aug 13 01:08:10.008160 env[1307]: time="2025-08-13T01:08:10.008138539Z" level=info msg="StopPodSandbox for \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\" returns successfully" Aug 13 01:08:10.008893 env[1307]: time="2025-08-13T01:08:10.008862149Z" level=info msg="RemovePodSandbox for \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\"" Aug 13 01:08:10.008943 env[1307]: time="2025-08-13T01:08:10.008893046Z" level=info msg="Forcibly stopping sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\"" Aug 13 01:08:10.076891 sshd[5005]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:10.077000 audit[5005]: USER_END pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:10.082667 kernel: audit: type=1106 audit(1755047290.077:492): pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:10.082000 audit[5005]: CRED_DISP pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:10.086752 kernel: audit: type=1104 audit(1755047290.082:493): pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:10.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.139:22-10.0.0.1:39118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:10.088388 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:39118.service: Deactivated successfully. Aug 13 01:08:10.089932 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:08:10.090320 systemd-logind[1290]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:08:10.091013 systemd-logind[1290]: Removed session 15. Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.057 [WARNING][5052] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0", GenerateName:"calico-apiserver-66678997c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d6cb667-b4a4-4c92-a22d-5b802942ec42", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66678997c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a56ddcee6fe56e1b773ca1f7730135277df577f809e743985de2217c9ced4dd", Pod:"calico-apiserver-66678997c4-cjt94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid5d6c2351a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.057 [INFO][5052] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.057 [INFO][5052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" iface="eth0" netns="" Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.057 [INFO][5052] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.057 [INFO][5052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.082 [INFO][5061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" HandleID="k8s-pod-network.dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.083 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.083 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.091 [WARNING][5061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" HandleID="k8s-pod-network.dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.091 [INFO][5061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" HandleID="k8s-pod-network.dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Workload="localhost-k8s-calico--apiserver--66678997c4--cjt94-eth0" Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.093 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:10.096325 env[1307]: 2025-08-13 01:08:10.094 [INFO][5052] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb" Aug 13 01:08:10.096809 env[1307]: time="2025-08-13T01:08:10.096360860Z" level=info msg="TearDown network for sandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\" successfully" Aug 13 01:08:10.243942 env[1307]: time="2025-08-13T01:08:10.243883486Z" level=info msg="RemovePodSandbox \"dc2bbacd6d2ee41a72789198bb2bb78ff0e4fa5da843d881efcb9bdc6ac7cedb\" returns successfully" Aug 13 01:08:10.244407 env[1307]: time="2025-08-13T01:08:10.244382012Z" level=info msg="StopPodSandbox for \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\"" Aug 13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.276 [WARNING][5081] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0", GenerateName:"calico-kube-controllers-975f76598-", Namespace:"calico-system", SelfLink:"", UID:"43d02c55-4060-4d42-8d96-57faedfa9ddb", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"975f76598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a", Pod:"calico-kube-controllers-975f76598-2gqft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic57eac8fe8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.277 [INFO][5081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 
13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.277 [INFO][5081] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" iface="eth0" netns="" Aug 13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.277 [INFO][5081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.277 [INFO][5081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.306 [INFO][5090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" HandleID="k8s-pod-network.e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.307 [INFO][5090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.307 [INFO][5090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.313 [WARNING][5090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" HandleID="k8s-pod-network.e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.313 [INFO][5090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" HandleID="k8s-pod-network.e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.315 [INFO][5090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:10.319071 env[1307]: 2025-08-13 01:08:10.316 [INFO][5081] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:08:10.319783 env[1307]: time="2025-08-13T01:08:10.319111388Z" level=info msg="TearDown network for sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\" successfully" Aug 13 01:08:10.319783 env[1307]: time="2025-08-13T01:08:10.319152193Z" level=info msg="StopPodSandbox for \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\" returns successfully" Aug 13 01:08:10.320410 env[1307]: time="2025-08-13T01:08:10.320372375Z" level=info msg="RemovePodSandbox for \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\"" Aug 13 01:08:10.320476 env[1307]: time="2025-08-13T01:08:10.320417799Z" level=info msg="Forcibly stopping sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\"" Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.354 [WARNING][5108] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0", GenerateName:"calico-kube-controllers-975f76598-", Namespace:"calico-system", SelfLink:"", UID:"43d02c55-4060-4d42-8d96-57faedfa9ddb", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"975f76598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ec7f2c3aceeac9305720308ee5c69e4750625ae76124a821e1bc6d3458e130a", Pod:"calico-kube-controllers-975f76598-2gqft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic57eac8fe8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.354 [INFO][5108] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.354 [INFO][5108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" iface="eth0" netns="" Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.354 [INFO][5108] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.354 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.372 [INFO][5117] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" HandleID="k8s-pod-network.e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.372 [INFO][5117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.372 [INFO][5117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.378 [WARNING][5117] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" HandleID="k8s-pod-network.e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.378 [INFO][5117] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" HandleID="k8s-pod-network.e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Workload="localhost-k8s-calico--kube--controllers--975f76598--2gqft-eth0" Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.379 [INFO][5117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:10.384155 env[1307]: 2025-08-13 01:08:10.382 [INFO][5108] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880" Aug 13 01:08:10.384693 env[1307]: time="2025-08-13T01:08:10.384188819Z" level=info msg="TearDown network for sandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\" successfully" Aug 13 01:08:10.478043 env[1307]: time="2025-08-13T01:08:10.477971075Z" level=info msg="RemovePodSandbox \"e09b309f68e507f7a47b9b8418eb0058c2b4414a29eae9e603193a97584b7880\" returns successfully" Aug 13 01:08:10.478494 env[1307]: time="2025-08-13T01:08:10.478466737Z" level=info msg="StopPodSandbox for \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\"" Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.507 [WARNING][5135] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--b5ght-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"16c33eda-a5e8-41ab-8360-f83220e743eb", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6", Pod:"goldmane-58fd7646b9-b5ght", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib295fae38fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.507 [INFO][5135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.507 [INFO][5135] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" iface="eth0" netns="" Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.507 [INFO][5135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.507 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.528 [INFO][5143] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" HandleID="k8s-pod-network.c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.528 [INFO][5143] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.528 [INFO][5143] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.534 [WARNING][5143] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" HandleID="k8s-pod-network.c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.534 [INFO][5143] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" HandleID="k8s-pod-network.c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.535 [INFO][5143] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:10.539240 env[1307]: 2025-08-13 01:08:10.537 [INFO][5135] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:08:10.539916 env[1307]: time="2025-08-13T01:08:10.539836304Z" level=info msg="TearDown network for sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\" successfully" Aug 13 01:08:10.539916 env[1307]: time="2025-08-13T01:08:10.539877530Z" level=info msg="StopPodSandbox for \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\" returns successfully" Aug 13 01:08:10.540474 env[1307]: time="2025-08-13T01:08:10.540433181Z" level=info msg="RemovePodSandbox for \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\"" Aug 13 01:08:10.540537 env[1307]: time="2025-08-13T01:08:10.540481500Z" level=info msg="Forcibly stopping sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\"" Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.570 [WARNING][5161] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--b5ght-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"16c33eda-a5e8-41ab-8360-f83220e743eb", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6", Pod:"goldmane-58fd7646b9-b5ght", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib295fae38fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.570 [INFO][5161] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.570 [INFO][5161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" iface="eth0" netns="" Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.570 [INFO][5161] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.570 [INFO][5161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.589 [INFO][5171] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" HandleID="k8s-pod-network.c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.589 [INFO][5171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.589 [INFO][5171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.594 [WARNING][5171] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" HandleID="k8s-pod-network.c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.594 [INFO][5171] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" HandleID="k8s-pod-network.c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Workload="localhost-k8s-goldmane--58fd7646b9--b5ght-eth0" Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.596 [INFO][5171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:10.599487 env[1307]: 2025-08-13 01:08:10.597 [INFO][5161] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b" Aug 13 01:08:10.599995 env[1307]: time="2025-08-13T01:08:10.599522218Z" level=info msg="TearDown network for sandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\" successfully" Aug 13 01:08:10.603323 env[1307]: time="2025-08-13T01:08:10.603261921Z" level=info msg="RemovePodSandbox \"c77cf00e53a1705434043a566ea205f82b812c79fd55607a6548558334a2382b\" returns successfully" Aug 13 01:08:10.603911 env[1307]: time="2025-08-13T01:08:10.603868625Z" level=info msg="StopPodSandbox for \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\"" Aug 13 01:08:10.674601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3772075650.mount: Deactivated successfully. Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.634 [WARNING][5188] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0", GenerateName:"calico-apiserver-66678997c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b3501aa-75e5-451f-8e19-819e941f33bc", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66678997c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40", Pod:"calico-apiserver-66678997c4-jk2r2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4ee9051bbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.635 [INFO][5188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.635 [INFO][5188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" iface="eth0" netns="" Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.636 [INFO][5188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.636 [INFO][5188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.660 [INFO][5196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" HandleID="k8s-pod-network.9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.662 [INFO][5196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.662 [INFO][5196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.667 [WARNING][5196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" HandleID="k8s-pod-network.9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.668 [INFO][5196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" HandleID="k8s-pod-network.9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.669 [INFO][5196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:10.677456 env[1307]: 2025-08-13 01:08:10.675 [INFO][5188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:08:10.677929 env[1307]: time="2025-08-13T01:08:10.677517336Z" level=info msg="TearDown network for sandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\" successfully" Aug 13 01:08:10.677929 env[1307]: time="2025-08-13T01:08:10.677560335Z" level=info msg="StopPodSandbox for \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\" returns successfully" Aug 13 01:08:10.677929 env[1307]: time="2025-08-13T01:08:10.677841171Z" level=info msg="RemovePodSandbox for \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\"" Aug 13 01:08:10.677929 env[1307]: time="2025-08-13T01:08:10.677873581Z" level=info msg="Forcibly stopping sandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\"" Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.708 [WARNING][5213] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0", GenerateName:"calico-apiserver-66678997c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b3501aa-75e5-451f-8e19-819e941f33bc", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66678997c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5da20bdfe9f6ef8663197fc5238d10b063faadcfe2fa81504ca3f8430ea45f40", Pod:"calico-apiserver-66678997c4-jk2r2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4ee9051bbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.709 [INFO][5213] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.709 [INFO][5213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" iface="eth0" netns="" Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.709 [INFO][5213] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.709 [INFO][5213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.727 [INFO][5221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" HandleID="k8s-pod-network.9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.727 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.727 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.732 [WARNING][5221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" HandleID="k8s-pod-network.9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.732 [INFO][5221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" HandleID="k8s-pod-network.9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Workload="localhost-k8s-calico--apiserver--66678997c4--jk2r2-eth0" Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.733 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:10.737219 env[1307]: 2025-08-13 01:08:10.735 [INFO][5213] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3" Aug 13 01:08:10.737747 env[1307]: time="2025-08-13T01:08:10.737249595Z" level=info msg="TearDown network for sandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\" successfully" Aug 13 01:08:10.740767 env[1307]: time="2025-08-13T01:08:10.740742934Z" level=info msg="RemovePodSandbox \"9af954070f3d08557f2fe40a80a6f845c56400b482c894702c02787450ca34d3\" returns successfully" Aug 13 01:08:10.741340 env[1307]: time="2025-08-13T01:08:10.741308494Z" level=info msg="StopPodSandbox for \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\"" Aug 13 01:08:10.806918 env[1307]: 2025-08-13 01:08:10.775 [WARNING][5240] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bgm6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2333ecfa-adb6-4791-8fd0-6a082b51d429", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722", Pod:"csi-node-driver-bgm6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib68604f2712", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:10.806918 env[1307]: 2025-08-13 01:08:10.775 [INFO][5240] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:08:10.806918 env[1307]: 
2025-08-13 01:08:10.776 [INFO][5240] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" iface="eth0" netns="" Aug 13 01:08:10.806918 env[1307]: 2025-08-13 01:08:10.776 [INFO][5240] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:08:10.806918 env[1307]: 2025-08-13 01:08:10.776 [INFO][5240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:08:10.806918 env[1307]: 2025-08-13 01:08:10.797 [INFO][5248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" HandleID="k8s-pod-network.f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:08:10.806918 env[1307]: 2025-08-13 01:08:10.797 [INFO][5248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:10.806918 env[1307]: 2025-08-13 01:08:10.797 [INFO][5248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:10.806918 env[1307]: 2025-08-13 01:08:10.801 [WARNING][5248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" HandleID="k8s-pod-network.f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:08:10.806918 env[1307]: 2025-08-13 01:08:10.801 [INFO][5248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" HandleID="k8s-pod-network.f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:08:10.806918 env[1307]: 2025-08-13 01:08:10.802 [INFO][5248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:10.806918 env[1307]: 2025-08-13 01:08:10.804 [INFO][5240] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:08:10.807560 env[1307]: time="2025-08-13T01:08:10.806953498Z" level=info msg="TearDown network for sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\" successfully" Aug 13 01:08:10.807560 env[1307]: time="2025-08-13T01:08:10.806988823Z" level=info msg="StopPodSandbox for \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\" returns successfully" Aug 13 01:08:10.807662 env[1307]: time="2025-08-13T01:08:10.807569920Z" level=info msg="RemovePodSandbox for \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\"" Aug 13 01:08:10.807662 env[1307]: time="2025-08-13T01:08:10.807631824Z" level=info msg="Forcibly stopping sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\"" Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.861 [WARNING][5265] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bgm6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2333ecfa-adb6-4791-8fd0-6a082b51d429", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722", Pod:"csi-node-driver-bgm6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib68604f2712", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.873 [INFO][5265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.873 [INFO][5265] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" iface="eth0" netns="" Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.874 [INFO][5265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.874 [INFO][5265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.931 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" HandleID="k8s-pod-network.f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.932 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.932 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.937 [WARNING][5273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" HandleID="k8s-pod-network.f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.937 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" HandleID="k8s-pod-network.f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Workload="localhost-k8s-csi--node--driver--bgm6z-eth0" Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.938 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:10.942093 env[1307]: 2025-08-13 01:08:10.940 [INFO][5265] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d" Aug 13 01:08:10.942093 env[1307]: time="2025-08-13T01:08:10.942053326Z" level=info msg="TearDown network for sandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\" successfully" Aug 13 01:08:11.120814 env[1307]: time="2025-08-13T01:08:11.118543746Z" level=info msg="RemovePodSandbox \"f76d8208a62cadbff556e17289582a6ead8e8fc664473f0e7220778c374d2a8d\" returns successfully" Aug 13 01:08:11.120814 env[1307]: time="2025-08-13T01:08:11.119063564Z" level=info msg="StopPodSandbox for \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\"" Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.152 [WARNING][5290] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" WorkloadEndpoint="localhost-k8s-whisker--c9c7549f4--nlhf8-eth0" Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.152 [INFO][5290] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.152 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" iface="eth0" netns="" Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.153 [INFO][5290] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.153 [INFO][5290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.174 [INFO][5298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" HandleID="k8s-pod-network.e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Workload="localhost-k8s-whisker--c9c7549f4--nlhf8-eth0" Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.175 [INFO][5298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.175 [INFO][5298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.180 [WARNING][5298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" HandleID="k8s-pod-network.e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Workload="localhost-k8s-whisker--c9c7549f4--nlhf8-eth0" Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.181 [INFO][5298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" HandleID="k8s-pod-network.e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Workload="localhost-k8s-whisker--c9c7549f4--nlhf8-eth0" Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.182 [INFO][5298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:11.185918 env[1307]: 2025-08-13 01:08:11.184 [INFO][5290] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:08:11.186372 env[1307]: time="2025-08-13T01:08:11.185942498Z" level=info msg="TearDown network for sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\" successfully" Aug 13 01:08:11.186372 env[1307]: time="2025-08-13T01:08:11.185980026Z" level=info msg="StopPodSandbox for \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\" returns successfully" Aug 13 01:08:11.186700 env[1307]: time="2025-08-13T01:08:11.186650702Z" level=info msg="RemovePodSandbox for \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\"" Aug 13 01:08:11.186890 env[1307]: time="2025-08-13T01:08:11.186700684Z" level=info msg="Forcibly stopping sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\"" Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.216 [WARNING][5316] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" WorkloadEndpoint="localhost-k8s-whisker--c9c7549f4--nlhf8-eth0" Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.216 [INFO][5316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.216 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" iface="eth0" netns="" Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.216 [INFO][5316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.216 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.235 [INFO][5325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" HandleID="k8s-pod-network.e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Workload="localhost-k8s-whisker--c9c7549f4--nlhf8-eth0" Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.235 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.235 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.240 [WARNING][5325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" HandleID="k8s-pod-network.e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Workload="localhost-k8s-whisker--c9c7549f4--nlhf8-eth0" Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.240 [INFO][5325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" HandleID="k8s-pod-network.e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Workload="localhost-k8s-whisker--c9c7549f4--nlhf8-eth0" Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.242 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:11.248500 env[1307]: 2025-08-13 01:08:11.246 [INFO][5316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b" Aug 13 01:08:11.248500 env[1307]: time="2025-08-13T01:08:11.248464286Z" level=info msg="TearDown network for sandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\" successfully" Aug 13 01:08:11.256367 env[1307]: time="2025-08-13T01:08:11.256317130Z" level=info msg="RemovePodSandbox \"e93dd9c64a1c1bd58f885b9019aaa0c5940c65dafcadca0f0a4d0e8135fc124b\" returns successfully" Aug 13 01:08:11.257434 env[1307]: time="2025-08-13T01:08:11.257399083Z" level=info msg="StopPodSandbox for \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\"" Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.290 [WARNING][5343] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8b672ef7-dbd1-4d90-9a79-75019f71379f", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b", Pod:"coredns-7c65d6cfc9-tbrxc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7c124b65d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.290 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.290 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" iface="eth0" netns="" Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.290 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.290 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.308 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" HandleID="k8s-pod-network.feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.308 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.309 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.314 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" HandleID="k8s-pod-network.feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.314 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" HandleID="k8s-pod-network.feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.315 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:11.319258 env[1307]: 2025-08-13 01:08:11.317 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:08:11.319771 env[1307]: time="2025-08-13T01:08:11.319284069Z" level=info msg="TearDown network for sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\" successfully" Aug 13 01:08:11.319771 env[1307]: time="2025-08-13T01:08:11.319327137Z" level=info msg="StopPodSandbox for \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\" returns successfully" Aug 13 01:08:11.320008 env[1307]: time="2025-08-13T01:08:11.319928966Z" level=info msg="RemovePodSandbox for \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\"" Aug 13 01:08:11.320278 env[1307]: time="2025-08-13T01:08:11.320219180Z" level=info msg="Forcibly stopping sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\"" Aug 13 01:08:11.365771 env[1307]: time="2025-08-13T01:08:11.365701825Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:11.368376 env[1307]: time="2025-08-13T01:08:11.368343450Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:11.370098 env[1307]: time="2025-08-13T01:08:11.370055573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:11.371850 env[1307]: time="2025-08-13T01:08:11.371787945Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:11.372523 env[1307]: time="2025-08-13T01:08:11.372485309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 01:08:11.375228 env[1307]: time="2025-08-13T01:08:11.374990003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:08:11.378455 env[1307]: time="2025-08-13T01:08:11.378425651Z" level=info msg="CreateContainer within sandbox \"2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.351 [WARNING][5369] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8b672ef7-dbd1-4d90-9a79-75019f71379f", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5adb25c8a8199e1e2fe51a61ce86566030e7fa95a45f3378e912df5b029351b", Pod:"coredns-7c65d6cfc9-tbrxc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7c124b65d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.351 [INFO][5369] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.352 [INFO][5369] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" iface="eth0" netns="" Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.352 [INFO][5369] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.352 [INFO][5369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.371 [INFO][5377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" HandleID="k8s-pod-network.feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.371 [INFO][5377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.371 [INFO][5377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.379 [WARNING][5377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" HandleID="k8s-pod-network.feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.379 [INFO][5377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" HandleID="k8s-pod-network.feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Workload="localhost-k8s-coredns--7c65d6cfc9--tbrxc-eth0" Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.384 [INFO][5377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:11.390874 env[1307]: 2025-08-13 01:08:11.385 [INFO][5369] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b" Aug 13 01:08:11.391436 env[1307]: time="2025-08-13T01:08:11.390889227Z" level=info msg="TearDown network for sandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\" successfully" Aug 13 01:08:11.393781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3743516795.mount: Deactivated successfully. Aug 13 01:08:11.397619 env[1307]: time="2025-08-13T01:08:11.397547651Z" level=info msg="RemovePodSandbox \"feabc261fa47963e5b2352683f1610050bb307507df7c40827c59c3ba815823b\" returns successfully" Aug 13 01:08:11.401068 env[1307]: time="2025-08-13T01:08:11.401022201Z" level=info msg="CreateContainer within sandbox \"2babc8abb1138f1c6393a4809a0a214bc5f9cdcdb8d1429abf2bcc3e2e9d18c6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fa656a05a72559ca66e1bb60bbdf6ed750493bbff75e6b3e05039ac5e50f16de\"" Aug 13 01:08:11.401503 env[1307]: time="2025-08-13T01:08:11.401475957Z" level=info msg="StartContainer for \"fa656a05a72559ca66e1bb60bbdf6ed750493bbff75e6b3e05039ac5e50f16de\"" Aug 13 01:08:11.470735 env[1307]: time="2025-08-13T01:08:11.470672209Z" level=info msg="StartContainer for \"fa656a05a72559ca66e1bb60bbdf6ed750493bbff75e6b3e05039ac5e50f16de\" returns successfully" Aug 13 01:08:11.710179 kubelet[2136]: I0813 01:08:11.710072 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-b5ght" podStartSLOduration=30.165806111 podStartE2EDuration="46.698003278s" podCreationTimestamp="2025-08-13 01:07:25 +0000 UTC" firstStartedPulling="2025-08-13 01:07:54.841880846 +0000 UTC m=+45.487010805" lastFinishedPulling="2025-08-13 01:08:11.374078013 +0000 UTC m=+62.019207972" observedRunningTime="2025-08-13 01:08:11.697842732 +0000 UTC m=+62.342972721" watchObservedRunningTime="2025-08-13 01:08:11.698003278 +0000 UTC m=+62.343133237" Aug 13 01:08:11.715000 audit[5433]: NETFILTER_CFG table=filter:125 family=2 entries=12 op=nft_register_rule pid=5433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:11.715000 audit[5433]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff4933c860 a2=0 a3=7fff4933c84c items=0 ppid=2285 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:11.715000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:11.719000 audit[5433]: NETFILTER_CFG table=nat:126 family=2 entries=22 op=nft_register_rule pid=5433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:11.719000 audit[5433]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff4933c860 a2=0 a3=7fff4933c84c items=0 ppid=2285 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:11.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:12.705518 systemd[1]: run-containerd-runc-k8s.io-fa656a05a72559ca66e1bb60bbdf6ed750493bbff75e6b3e05039ac5e50f16de-runc.t2OAQe.mount: Deactivated successfully. Aug 13 01:08:13.281486 env[1307]: time="2025-08-13T01:08:13.281425392Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:13.283404 env[1307]: time="2025-08-13T01:08:13.283342359Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:13.284922 env[1307]: time="2025-08-13T01:08:13.284879943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:13.286417 env[1307]: time="2025-08-13T01:08:13.286385239Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:08:13.286890 env[1307]: time="2025-08-13T01:08:13.286861340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 01:08:13.289116 env[1307]: time="2025-08-13T01:08:13.289081728Z" level=info msg="CreateContainer within sandbox \"01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 01:08:13.301088 env[1307]: time="2025-08-13T01:08:13.301012493Z" level=info msg="CreateContainer within sandbox \"01e2ab4b3cf896c49e7989165e88bdd59fee6a5a9d8421018e83bd0353fc3722\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"72ed29114ff8f23a4a1c75efb7bfd421033742cabd1b6fc244e3c0d68d8a7d81\"" Aug 13 01:08:13.301746 env[1307]: time="2025-08-13T01:08:13.301711807Z" level=info msg="StartContainer for \"72ed29114ff8f23a4a1c75efb7bfd421033742cabd1b6fc244e3c0d68d8a7d81\"" Aug 13 01:08:13.347538 env[1307]: time="2025-08-13T01:08:13.347490037Z" level=info msg="StartContainer for \"72ed29114ff8f23a4a1c75efb7bfd421033742cabd1b6fc244e3c0d68d8a7d81\" returns successfully" Aug 13 01:08:13.534557 kubelet[2136]: I0813 01:08:13.534428 2136 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 01:08:13.536027 kubelet[2136]: I0813 01:08:13.535998 2136 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 01:08:13.702786 kubelet[2136]: I0813 01:08:13.702056 2136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bgm6z" podStartSLOduration=24.428107546 podStartE2EDuration="47.702031739s" podCreationTimestamp="2025-08-13 01:07:26 +0000 UTC" firstStartedPulling="2025-08-13 01:07:50.013683137 +0000 UTC m=+40.658813096" lastFinishedPulling="2025-08-13 01:08:13.28760733 +0000 UTC m=+63.932737289" observedRunningTime="2025-08-13 01:08:13.7017078 +0000 UTC m=+64.346837759" watchObservedRunningTime="2025-08-13 01:08:13.702031739 +0000 UTC m=+64.347161688" Aug 13 01:08:15.078698 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:48516.service. Aug 13 01:08:15.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.139:22-10.0.0.1:48516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:15.079807 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 01:08:15.079882 kernel: audit: type=1130 audit(1755047295.078:497): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.139:22-10.0.0.1:48516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:15.120000 audit[5504]: USER_ACCT pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.121007 sshd[5504]: Accepted publickey for core from 10.0.0.1 port 48516 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:15.124650 kernel: audit: type=1101 audit(1755047295.120:498): pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.124000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.125377 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:15.129864 systemd-logind[1290]: New session 16 of user core. 
Aug 13 01:08:15.130893 kernel: audit: type=1103 audit(1755047295.124:499): pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.130939 kernel: audit: type=1006 audit(1755047295.124:500): pid=5504 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Aug 13 01:08:15.124000 audit[5504]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4c816540 a2=3 a3=0 items=0 ppid=1 pid=5504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:15.131009 systemd[1]: Started session-16.scope. Aug 13 01:08:15.135850 kernel: audit: type=1300 audit(1755047295.124:500): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4c816540 a2=3 a3=0 items=0 ppid=1 pid=5504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:15.135901 kernel: audit: type=1327 audit(1755047295.124:500): proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:15.124000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:15.136000 audit[5504]: USER_START pid=5504 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.140816 kernel: audit: type=1105 audit(1755047295.136:501): pid=5504 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.140000 audit[5507]: CRED_ACQ pid=5507 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.144615 kernel: audit: type=1103 audit(1755047295.140:502): pid=5507 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.358867 sshd[5504]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:15.359000 audit[5504]: USER_END pid=5504 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.361083 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:48516.service: Deactivated successfully. Aug 13 01:08:15.362049 systemd[1]: session-16.scope: Deactivated successfully. 
Aug 13 01:08:15.359000 audit[5504]: CRED_DISP pid=5504 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.364268 systemd-logind[1290]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:08:15.365046 systemd-logind[1290]: Removed session 16. Aug 13 01:08:15.367438 kernel: audit: type=1106 audit(1755047295.359:503): pid=5504 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.367493 kernel: audit: type=1104 audit(1755047295.359:504): pid=5504 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:15.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.139:22-10.0.0.1:48516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:17.473000 audit[5544]: NETFILTER_CFG table=filter:127 family=2 entries=11 op=nft_register_rule pid=5544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:17.473000 audit[5544]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc7be607b0 a2=0 a3=7ffc7be6079c items=0 ppid=2285 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:17.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:17.481000 audit[5544]: NETFILTER_CFG table=nat:128 family=2 entries=29 op=nft_register_chain pid=5544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:17.481000 audit[5544]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffc7be607b0 a2=0 a3=7ffc7be6079c items=0 ppid=2285 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:17.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:20.363691 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:47234.service. Aug 13 01:08:20.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.139:22-10.0.0.1:47234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:20.365030 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 01:08:20.365096 kernel: audit: type=1130 audit(1755047300.363:508): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.139:22-10.0.0.1:47234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:08:20.429000 audit[5566]: USER_ACCT pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.430499 sshd[5566]: Accepted publickey for core from 10.0.0.1 port 47234 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:20.435746 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:20.432000 audit[5566]: CRED_ACQ pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.440873 kernel: audit: type=1101 audit(1755047300.429:509): pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.440977 kernel: audit: type=1103 audit(1755047300.432:510): pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.441007 kernel: audit: type=1006 audit(1755047300.432:511): pid=5566 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Aug 13 01:08:20.443354 kernel: audit: type=1300 audit(1755047300.432:511): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe618a8ea0 a2=3 a3=0 items=0 ppid=1 pid=5566 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:20.432000 audit[5566]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe618a8ea0 a2=3 a3=0 items=0 ppid=1 pid=5566 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:20.447007 systemd-logind[1290]: New session 17 of user core. Aug 13 01:08:20.432000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:20.448869 systemd[1]: Started session-17.scope. 
Aug 13 01:08:20.450470 kernel: audit: type=1327 audit(1755047300.432:511): proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:20.462000 audit[5566]: USER_START pid=5566 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.464000 audit[5569]: CRED_ACQ pid=5569 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.471396 kernel: audit: type=1105 audit(1755047300.462:512): pid=5566 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.471520 kernel: audit: type=1103 audit(1755047300.464:513): pid=5569 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.692468 sshd[5566]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:20.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.139:22-10.0.0.1:47236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:20.698051 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:47236.service. Aug 13 01:08:20.702679 kernel: audit: type=1130 audit(1755047300.697:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.139:22-10.0.0.1:47236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:20.702000 audit[5566]: USER_END pid=5566 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.705020 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:47234.service: Deactivated successfully. Aug 13 01:08:20.706844 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:08:20.707607 systemd-logind[1290]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:08:20.708653 kernel: audit: type=1106 audit(1755047300.702:515): pid=5566 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.709080 systemd-logind[1290]: Removed session 17. 
Aug 13 01:08:20.702000 audit[5566]: CRED_DISP pid=5566 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.139:22-10.0.0.1:47234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:20.749000 audit[5601]: USER_ACCT pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.750350 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 47236 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:20.750000 audit[5601]: CRED_ACQ pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.751000 audit[5601]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf5ec3bd0 a2=3 a3=0 items=0 ppid=1 pid=5601 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:20.751000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:20.751951 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:20.757002 systemd-logind[1290]: New session 18 of user core. Aug 13 01:08:20.758091 systemd[1]: Started session-18.scope. Aug 13 01:08:20.765000 audit[5601]: USER_START pid=5601 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:20.767000 audit[5606]: CRED_ACQ pid=5606 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:21.165338 sshd[5601]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:21.165000 audit[5601]: USER_END pid=5601 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:21.167000 audit[5601]: CRED_DISP pid=5601 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:21.168681 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:47246.service. Aug 13 01:08:21.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.139:22-10.0.0.1:47246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:08:21.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.139:22-10.0.0.1:47236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:21.172788 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:47236.service: Deactivated successfully. Aug 13 01:08:21.173714 systemd-logind[1290]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:08:21.173835 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 01:08:21.176538 systemd-logind[1290]: Removed session 18. Aug 13 01:08:21.219000 audit[5613]: USER_ACCT pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:21.220796 sshd[5613]: Accepted publickey for core from 10.0.0.1 port 47246 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:21.224000 audit[5613]: CRED_ACQ pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:21.224000 audit[5613]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2b9aa390 a2=3 a3=0 items=0 ppid=1 pid=5613 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:21.224000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:21.225362 sshd[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:21.234984 systemd-logind[1290]: New session 19 of user core. Aug 13 01:08:21.235472 systemd[1]: Started session-19.scope. 
Aug 13 01:08:21.261000 audit[5613]: USER_START pid=5613 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:21.262000 audit[5618]: CRED_ACQ pid=5618 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:21.461865 kubelet[2136]: I0813 01:08:21.461721 2136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:08:21.517000 audit[5626]: NETFILTER_CFG table=filter:129 family=2 entries=9 op=nft_register_rule pid=5626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:21.517000 audit[5626]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd0f3aef20 a2=0 a3=7ffd0f3aef0c items=0 ppid=2285 pid=5626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:21.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:21.526000 audit[5626]: NETFILTER_CFG table=nat:130 family=2 entries=31 op=nft_register_chain pid=5626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:21.526000 audit[5626]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffd0f3aef20 a2=0 a3=7ffd0f3aef0c items=0 ppid=2285 pid=5626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:21.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:22.464491 kubelet[2136]: E0813 01:08:22.464451 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:08:24.076000 audit[5631]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:24.076000 audit[5631]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffcab556530 a2=0 a3=7ffcab55651c items=0 ppid=2285 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:24.076000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:24.082000 audit[5631]: NETFILTER_CFG table=nat:132 family=2 entries=26 op=nft_register_rule pid=5631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:24.082000 audit[5631]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffcab556530 a2=0 a3=0 items=0 ppid=2285 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 01:08:24.082000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:24.103000 audit[5633]: NETFILTER_CFG table=filter:133 family=2 entries=32 op=nft_register_rule pid=5633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:24.103000 audit[5633]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffef975e230 a2=0 a3=7ffef975e21c items=0 ppid=2285 pid=5633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:24.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:24.109000 audit[5633]: NETFILTER_CFG table=nat:134 family=2 entries=26 op=nft_register_rule pid=5633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:24.110814 sshd[5613]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:24.109000 audit[5633]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffef975e230 a2=0 a3=0 items=0 ppid=2285 pid=5633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:24.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:24.111000 audit[5613]: USER_END pid=5613 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.111000 audit[5613]: CRED_DISP pid=5613 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.113308 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:47262.service. Aug 13 01:08:24.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.139:22-10.0.0.1:47262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:24.113827 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:47246.service: Deactivated successfully. Aug 13 01:08:24.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.139:22-10.0.0.1:47246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:24.114510 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:08:24.115470 systemd-logind[1290]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:08:24.116442 systemd-logind[1290]: Removed session 19. 
Aug 13 01:08:24.153000 audit[5634]: USER_ACCT pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.154203 sshd[5634]: Accepted publickey for core from 10.0.0.1 port 47262 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:24.154000 audit[5634]: CRED_ACQ pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.154000 audit[5634]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1d60a590 a2=3 a3=0 items=0 ppid=1 pid=5634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:24.154000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:24.155295 sshd[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:24.158967 systemd-logind[1290]: New session 20 of user core. Aug 13 01:08:24.159769 systemd[1]: Started session-20.scope. Aug 13 01:08:24.163000 audit[5634]: USER_START pid=5634 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.164000 audit[5639]: CRED_ACQ pid=5639 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.532636 sshd[5634]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:24.533000 audit[5634]: USER_END pid=5634 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.533000 audit[5634]: CRED_DISP pid=5634 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.139:22-10.0.0.1:47276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:24.535467 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:47276.service. Aug 13 01:08:24.536086 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:47262.service: Deactivated successfully. Aug 13 01:08:24.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.139:22-10.0.0.1:47262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:24.536814 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 01:08:24.537770 systemd-logind[1290]: Session 20 logged out. Waiting for processes to exit. 
Aug 13 01:08:24.538726 systemd-logind[1290]: Removed session 20. Aug 13 01:08:24.606446 sshd[5647]: Accepted publickey for core from 10.0.0.1 port 47276 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:24.606887 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:24.604000 audit[5647]: USER_ACCT pid=5647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.605000 audit[5647]: CRED_ACQ pid=5647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.606000 audit[5647]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5d150e00 a2=3 a3=0 items=0 ppid=1 pid=5647 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:24.606000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:24.612433 systemd-logind[1290]: New session 21 of user core. Aug 13 01:08:24.613147 systemd[1]: Started session-21.scope. Aug 13 01:08:24.618000 audit[5647]: USER_START pid=5647 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.619000 audit[5652]: CRED_ACQ pid=5652 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.719605 sshd[5647]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:24.720000 audit[5647]: USER_END pid=5647 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.720000 audit[5647]: CRED_DISP pid=5647 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:24.722330 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:47276.service: Deactivated successfully. Aug 13 01:08:24.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.139:22-10.0.0.1:47276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:24.723534 systemd-logind[1290]: Session 21 logged out. Waiting for processes to exit. Aug 13 01:08:24.723688 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 01:08:24.724507 systemd-logind[1290]: Removed session 21. Aug 13 01:08:29.722626 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:47288.service. 
Aug 13 01:08:29.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.139:22-10.0.0.1:47288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:29.723826 kernel: kauditd_printk_skb: 63 callbacks suppressed Aug 13 01:08:29.886121 kernel: audit: type=1130 audit(1755047309.722:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.139:22-10.0.0.1:47288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:29.925000 audit[5670]: USER_ACCT pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:29.925963 sshd[5670]: Accepted publickey for core from 10.0.0.1 port 47288 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:29.927957 sshd[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:29.927000 audit[5670]: CRED_ACQ pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:29.931948 systemd-logind[1290]: New session 22 of user core. Aug 13 01:08:29.932707 systemd[1]: Started session-22.scope. Aug 13 01:08:29.933941 kernel: audit: type=1101 audit(1755047309.925:560): pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:29.934019 kernel: audit: type=1103 audit(1755047309.927:561): pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:29.934042 kernel: audit: type=1006 audit(1755047309.927:562): pid=5670 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Aug 13 01:08:29.927000 audit[5670]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc246f2a60 a2=3 a3=0 items=0 ppid=1 pid=5670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:29.940604 kernel: audit: type=1300 audit(1755047309.927:562): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc246f2a60 a2=3 a3=0 items=0 ppid=1 pid=5670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:29.940657 kernel: audit: type=1327 audit(1755047309.927:562): proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:29.927000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:29.937000 audit[5670]: USER_START pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:29.946863 kernel: audit: type=1105 audit(1755047309.937:563): pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:29.946915 kernel: audit: type=1103 audit(1755047309.938:564): pid=5673 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:29.938000 audit[5673]: CRED_ACQ pid=5673 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:30.079384 sshd[5670]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:30.080000 audit[5670]: USER_END pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:30.081966 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:47288.service: Deactivated successfully. Aug 13 01:08:30.082985 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 01:08:30.083364 systemd-logind[1290]: Session 22 logged out. Waiting for processes to exit. Aug 13 01:08:30.084024 systemd-logind[1290]: Removed session 22. Aug 13 01:08:30.080000 audit[5670]: CRED_DISP pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:30.088052 kernel: audit: type=1106 audit(1755047310.080:565): pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:30.088121 kernel: audit: type=1104 audit(1755047310.080:566): pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:30.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.139:22-10.0.0.1:47288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:08:31.301000 audit[5686]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:31.301000 audit[5686]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fffbf278ab0 a2=0 a3=7fffbf278a9c items=0 ppid=2285 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:31.301000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:31.309000 audit[5686]: NETFILTER_CFG table=nat:136 family=2 entries=110 op=nft_register_chain pid=5686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:31.309000 audit[5686]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fffbf278ab0 a2=0 a3=7fffbf278a9c items=0 ppid=2285 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:31.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:33.429160 systemd[1]: run-containerd-runc-k8s.io-1dbd9de285108d0a0a32cf4df1b7d33e4e4da24dbd3eaed944243c445b1a5636-runc.tvbNaC.mount: Deactivated successfully. Aug 13 01:08:34.456846 kubelet[2136]: E0813 01:08:34.456805 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:08:35.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.139:22-10.0.0.1:37630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:35.082154 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:37630.service. Aug 13 01:08:35.083612 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 01:08:35.083673 kernel: audit: type=1130 audit(1755047315.081:570): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.139:22-10.0.0.1:37630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:35.123000 audit[5709]: USER_ACCT pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.124660 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 37630 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:35.127011 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:35.126000 audit[5709]: CRED_ACQ pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.130512 systemd-logind[1290]: New session 23 of user core. Aug 13 01:08:35.131291 systemd[1]: Started session-23.scope. 
Aug 13 01:08:35.132325 kernel: audit: type=1101 audit(1755047315.123:571): pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.132472 kernel: audit: type=1103 audit(1755047315.126:572): pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.135201 kernel: audit: type=1006 audit(1755047315.126:573): pid=5709 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Aug 13 01:08:35.126000 audit[5709]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe35228ce0 a2=3 a3=0 items=0 ppid=1 pid=5709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:35.139460 kernel: audit: type=1300 audit(1755047315.126:573): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe35228ce0 a2=3 a3=0 items=0 ppid=1 pid=5709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:35.139507 kernel: audit: type=1327 audit(1755047315.126:573): proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:35.126000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:35.135000 audit[5709]: USER_START pid=5709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.146177 kernel: audit: type=1105 audit(1755047315.135:574): pid=5709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.146245 kernel: audit: type=1103 audit(1755047315.136:575): pid=5712 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.136000 audit[5712]: CRED_ACQ pid=5712 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.277682 sshd[5709]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:35.278000 audit[5709]: USER_END pid=5709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.280121 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:37630.service: Deactivated successfully. Aug 13 01:08:35.281414 systemd[1]: session-23.scope: Deactivated successfully. 
Aug 13 01:08:35.281470 systemd-logind[1290]: Session 23 logged out. Waiting for processes to exit. Aug 13 01:08:35.282574 systemd-logind[1290]: Removed session 23. Aug 13 01:08:35.278000 audit[5709]: CRED_DISP pid=5709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.286784 kernel: audit: type=1106 audit(1755047315.278:576): pid=5709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.286848 kernel: audit: type=1104 audit(1755047315.278:577): pid=5709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:35.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.139:22-10.0.0.1:37630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:36.763649 kubelet[2136]: I0813 01:08:36.763606 2136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:08:36.800000 audit[5724]: NETFILTER_CFG table=filter:137 family=2 entries=8 op=nft_register_rule pid=5724 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:36.800000 audit[5724]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe60583e50 a2=0 a3=7ffe60583e3c items=0 ppid=2285 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:36.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:36.807000 audit[5724]: NETFILTER_CFG table=nat:138 family=2 entries=62 op=nft_register_chain pid=5724 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:08:36.807000 audit[5724]: SYSCALL arch=c000003e syscall=46 success=yes exit=21988 a0=3 a1=7ffe60583e50 a2=0 a3=7ffe60583e3c items=0 ppid=2285 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:36.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:08:38.456995 kubelet[2136]: E0813 01:08:38.456948 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:08:40.281789 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:38174.service. Aug 13 01:08:40.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.139:22-10.0.0.1:38174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:08:40.287114 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 01:08:40.287195 kernel: audit: type=1130 audit(1755047320.280:581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.139:22-10.0.0.1:38174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:40.324000 audit[5725]: USER_ACCT pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.330606 kernel: audit: type=1101 audit(1755047320.324:582): pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.327391 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:40.330960 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 38174 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:40.325000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.335646 kernel: audit: type=1103 audit(1755047320.325:583): pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.337973 systemd[1]: Started session-24.scope. Aug 13 01:08:40.338933 systemd-logind[1290]: New session 24 of user core. 
Aug 13 01:08:40.344609 kernel: audit: type=1006 audit(1755047320.325:584): pid=5725 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Aug 13 01:08:40.325000 audit[5725]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd63f918c0 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:40.350616 kernel: audit: type=1300 audit(1755047320.325:584): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd63f918c0 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:40.325000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:40.357609 kernel: audit: type=1327 audit(1755047320.325:584): proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:40.342000 audit[5725]: USER_START pid=5725 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.367899 kernel: audit: type=1105 audit(1755047320.342:585): pid=5725 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.368090 kernel: audit: type=1103 audit(1755047320.343:586): pid=5728 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.343000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.493753 sshd[5725]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:40.493000 audit[5725]: USER_END pid=5725 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.497000 audit[5725]: CRED_DISP pid=5725 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.499621 kernel: audit: type=1106 audit(1755047320.493:587): pid=5725 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.499691 kernel: audit: type=1104 audit(1755047320.497:588): pid=5725 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:40.500754 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:38174.service: Deactivated successfully. Aug 13 01:08:40.502154 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 01:08:40.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.139:22-10.0.0.1:38174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:40.502755 systemd-logind[1290]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:08:40.503548 systemd-logind[1290]: Removed session 24. Aug 13 01:08:45.497507 systemd[1]: Started sshd@24-10.0.0.139:22-10.0.0.1:38176.service. Aug 13 01:08:45.502246 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 01:08:45.502293 kernel: audit: type=1130 audit(1755047325.496:590): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.139:22-10.0.0.1:38176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:45.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.139:22-10.0.0.1:38176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:08:45.542970 kernel: audit: type=1101 audit(1755047325.534:591): pid=5741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.543087 kernel: audit: type=1103 audit(1755047325.535:592): pid=5741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.534000 audit[5741]: USER_ACCT pid=5741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.535000 audit[5741]: CRED_ACQ pid=5741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.537085 sshd[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:08:45.543502 sshd[5741]: Accepted publickey for core from 10.0.0.1 port 38176 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:08:45.543406 systemd-logind[1290]: New session 25 of user core. Aug 13 01:08:45.543810 systemd[1]: Started session-25.scope. 
Aug 13 01:08:45.548002 kernel: audit: type=1006 audit(1755047325.535:593): pid=5741 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Aug 13 01:08:45.535000 audit[5741]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff90185fe0 a2=3 a3=0 items=0 ppid=1 pid=5741 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:45.553418 kernel: audit: type=1300 audit(1755047325.535:593): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff90185fe0 a2=3 a3=0 items=0 ppid=1 pid=5741 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:08:45.553465 kernel: audit: type=1327 audit(1755047325.535:593): proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:45.535000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:08:45.547000 audit[5741]: USER_START pid=5741 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.548000 audit[5744]: CRED_ACQ pid=5744 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.561175 kernel: audit: type=1105 audit(1755047325.547:594): pid=5741 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.561230 kernel: audit: type=1103 audit(1755047325.548:595): pid=5744 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.677834 sshd[5741]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:45.677000 audit[5741]: USER_END pid=5741 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.680513 systemd[1]: sshd@24-10.0.0.139:22-10.0.0.1:38176.service: Deactivated successfully. Aug 13 01:08:45.681693 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:08:45.681747 systemd-logind[1290]: Session 25 logged out. Waiting for processes to exit. Aug 13 01:08:45.682437 systemd-logind[1290]: Removed session 25. 
Aug 13 01:08:45.677000 audit[5741]: CRED_DISP pid=5741 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.686273 kernel: audit: type=1106 audit(1755047325.677:596): pid=5741 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.686398 kernel: audit: type=1104 audit(1755047325.677:597): pid=5741 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 01:08:45.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.139:22-10.0.0.1:38176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'