Aug 13 00:56:19.156959 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025 Aug 13 00:56:19.157006 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:56:19.157026 kernel: BIOS-provided physical RAM map: Aug 13 00:56:19.157040 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Aug 13 00:56:19.157054 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Aug 13 00:56:19.157068 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Aug 13 00:56:19.157087 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Aug 13 00:56:19.157102 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Aug 13 00:56:19.157115 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd27bfff] usable Aug 13 00:56:19.157129 kernel: BIOS-e820: [mem 0x00000000bd27c000-0x00000000bd285fff] ACPI data Aug 13 00:56:19.157143 kernel: BIOS-e820: [mem 0x00000000bd286000-0x00000000bf8ecfff] usable Aug 13 00:56:19.157157 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Aug 13 00:56:19.157171 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Aug 13 00:56:19.157186 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Aug 13 00:56:19.157209 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Aug 13 00:56:19.157225 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Aug 13 00:56:19.157240 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Aug 13 00:56:19.157255 kernel: NX (Execute Disable) protection: active Aug 13 00:56:19.157270 kernel: efi: EFI v2.70 by EDK II Aug 13 00:56:19.157286 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd27c018 Aug 13 00:56:19.157301 kernel: random: crng init done Aug 13 00:56:19.157317 kernel: SMBIOS 2.4 present. 
Aug 13 00:56:19.157336 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 Aug 13 00:56:19.157352 kernel: Hypervisor detected: KVM Aug 13 00:56:19.157367 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 00:56:19.157383 kernel: kvm-clock: cpu 0, msr 2419e001, primary cpu clock Aug 13 00:56:19.157399 kernel: kvm-clock: using sched offset of 14506038501 cycles Aug 13 00:56:19.157415 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 00:56:19.157431 kernel: tsc: Detected 2299.998 MHz processor Aug 13 00:56:19.157445 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:56:19.157460 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:56:19.157477 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Aug 13 00:56:19.157497 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:56:19.157513 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Aug 13 00:56:19.157528 kernel: Using GB pages for direct mapping Aug 13 00:56:19.157544 kernel: Secure boot disabled Aug 13 00:56:19.157604 kernel: ACPI: Early table checksum verification disabled Aug 13 00:56:19.157621 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Aug 13 00:56:19.157637 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Aug 13 00:56:19.157653 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Aug 13 00:56:19.157688 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Aug 13 00:56:19.157705 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Aug 13 00:56:19.157721 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212) Aug 13 00:56:19.157736 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Aug 13 00:56:19.157754 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Aug 13 00:56:19.157771 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Aug 13 00:56:19.157790 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Aug 13 00:56:19.157807 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Aug 13 00:56:19.157824 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Aug 13 00:56:19.157840 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Aug 13 00:56:19.157856 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Aug 13 00:56:19.157874 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Aug 13 00:56:19.157891 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Aug 13 00:56:19.157908 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Aug 13 00:56:19.157924 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Aug 13 00:56:19.157945 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Aug 13 00:56:19.157962 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Aug 13 00:56:19.157979 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 00:56:19.157995 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 00:56:19.158012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 00:56:19.158030 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Aug 13 
00:56:19.158047 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Aug 13 00:56:19.158065 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Aug 13 00:56:19.158082 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Aug 13 00:56:19.158101 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Aug 13 00:56:19.158118 kernel: Zone ranges: Aug 13 00:56:19.158134 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:56:19.158151 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 00:56:19.158168 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Aug 13 00:56:19.158185 kernel: Movable zone start for each node Aug 13 00:56:19.158202 kernel: Early memory node ranges Aug 13 00:56:19.158218 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Aug 13 00:56:19.158235 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Aug 13 00:56:19.158256 kernel: node 0: [mem 0x0000000000100000-0x00000000bd27bfff] Aug 13 00:56:19.158273 kernel: node 0: [mem 0x00000000bd286000-0x00000000bf8ecfff] Aug 13 00:56:19.158290 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Aug 13 00:56:19.158307 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Aug 13 00:56:19.158323 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Aug 13 00:56:19.158341 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:56:19.158358 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Aug 13 00:56:19.158375 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Aug 13 00:56:19.158392 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Aug 13 00:56:19.158412 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Aug 13 00:56:19.158429 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Aug 13 00:56:19.158446 kernel: ACPI: PM-Timer IO Port: 0xb008 Aug 13 00:56:19.158463 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 00:56:19.158480 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 00:56:19.158497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 00:56:19.158514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:56:19.158531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 00:56:19.158548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 00:56:19.158614 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:56:19.158629 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 00:56:19.158645 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Aug 13 00:56:19.158670 kernel: Booting paravirtualized kernel on KVM Aug 13 00:56:19.158688 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:56:19.158705 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Aug 13 00:56:19.158723 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Aug 13 00:56:19.158740 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Aug 13 00:56:19.158756 kernel: pcpu-alloc: [0] 0 1 Aug 13 00:56:19.158777 kernel: kvm-guest: PV spinlocks enabled Aug 13 00:56:19.158794 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 00:56:19.158811 kernel: Built 1 zonelists, mobility 
grouping on. Total pages: 1932270 Aug 13 00:56:19.158829 kernel: Policy zone: Normal Aug 13 00:56:19.158847 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:56:19.158865 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:56:19.158882 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 13 00:56:19.158899 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:56:19.158915 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:56:19.158937 kernel: Memory: 7515420K/7860544K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 344864K reserved, 0K cma-reserved) Aug 13 00:56:19.158954 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:56:19.158971 kernel: Kernel/User page tables isolation: enabled Aug 13 00:56:19.158988 kernel: ftrace: allocating 34608 entries in 136 pages Aug 13 00:56:19.159003 kernel: ftrace: allocated 136 pages with 2 groups Aug 13 00:56:19.159019 kernel: rcu: Hierarchical RCU implementation. Aug 13 00:56:19.159037 kernel: rcu: RCU event tracing is enabled. Aug 13 00:56:19.159054 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:56:19.159076 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:56:19.159106 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:56:19.159125 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 00:56:19.159146 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:56:19.159164 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 00:56:19.159183 kernel: Console: colour dummy device 80x25 Aug 13 00:56:19.159200 kernel: printk: console [ttyS0] enabled Aug 13 00:56:19.159219 kernel: ACPI: Core revision 20210730 Aug 13 00:56:19.159236 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:56:19.159255 kernel: x2apic enabled Aug 13 00:56:19.159277 kernel: Switched APIC routing to physical x2apic. Aug 13 00:56:19.159295 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Aug 13 00:56:19.159313 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 13 00:56:19.159331 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Aug 13 00:56:19.159349 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Aug 13 00:56:19.159367 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Aug 13 00:56:19.159385 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:56:19.159407 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Aug 13 00:56:19.159425 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Aug 13 00:56:19.159443 kernel: Spectre V2 : Mitigation: IBRS Aug 13 00:56:19.159460 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 00:56:19.159477 kernel: RETBleed: Mitigation: IBRS Aug 13 00:56:19.159492 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 00:56:19.159506 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Aug 13 00:56:19.159523 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 00:56:19.159542 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 00:56:19.159853 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 00:56:19.159875 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 00:56:19.159893 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:56:19.159911 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:56:19.159928 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:56:19.159946 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 00:56:19.159964 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 00:56:19.159982 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:56:19.160000 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:56:19.160023 kernel: LSM: Security Framework initializing Aug 13 00:56:19.160041 kernel: SELinux: Initializing. Aug 13 00:56:19.160059 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 00:56:19.160077 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 00:56:19.160095 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Aug 13 00:56:19.160114 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Aug 13 00:56:19.160132 kernel: signal: max sigframe size: 1776 Aug 13 00:56:19.160149 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:56:19.160164 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 00:56:19.160185 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:56:19.160203 kernel: x86: Booting SMP configuration: Aug 13 00:56:19.160219 kernel: .... node #0, CPUs: #1 Aug 13 00:56:19.160237 kernel: kvm-clock: cpu 1, msr 2419e041, secondary cpu clock Aug 13 00:56:19.160256 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Aug 13 00:56:19.160276 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Aug 13 00:56:19.160294 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:56:19.160311 kernel: smpboot: Max logical packages: 1 Aug 13 00:56:19.160333 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Aug 13 00:56:19.160351 kernel: devtmpfs: initialized Aug 13 00:56:19.160369 kernel: x86/mm: Memory block size: 128MB Aug 13 00:56:19.160387 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Aug 13 00:56:19.160406 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:56:19.160424 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:56:19.160442 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:56:19.160458 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:56:19.160476 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:56:19.160499 kernel: audit: type=2000 audit(1755046577.355:1): state=initialized audit_enabled=0 res=1 Aug 13 00:56:19.160517 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:56:19.160534 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:56:19.160552 kernel: cpuidle: using governor menu Aug 13 00:56:19.160595 kernel: ACPI: bus type PCI registered Aug 13 00:56:19.160613 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:56:19.160632 kernel: dca service started, version 1.12.1 Aug 13 00:56:19.160650 kernel: PCI: Using configuration type 1 for base access Aug 13 00:56:19.160676 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 00:56:19.160699 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:56:19.160717 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:56:19.160735 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:56:19.160753 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:56:19.160770 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:56:19.160788 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 00:56:19.160806 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 00:56:19.160824 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 00:56:19.160842 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Aug 13 00:56:19.160864 kernel: ACPI: Interpreter enabled Aug 13 00:56:19.160883 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 00:56:19.160900 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:56:19.160918 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:56:19.160937 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Aug 13 00:56:19.160955 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 00:56:19.161220 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:56:19.161549 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Aug 13 00:56:19.161596 kernel: PCI host bridge to bus 0000:00 Aug 13 00:56:19.161778 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 00:56:19.161935 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 00:56:19.162090 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 00:56:19.162243 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Aug 13 00:56:19.162394 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 00:56:19.162622 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 13 00:56:19.162986 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Aug 13 00:56:19.163174 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 13 00:56:19.163347 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Aug 13 00:56:19.163530 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Aug 13 00:56:19.163733 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Aug 13 00:56:19.163914 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Aug 13 00:56:19.164113 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 00:56:19.164445 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Aug 13 00:56:19.164640 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Aug 13 00:56:19.164829 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Aug 13 00:56:19.165004 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Aug 13 00:56:19.165179 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Aug 13 00:56:19.165208 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 00:56:19.165227 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 00:56:19.165246 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 00:56:19.165264 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 00:56:19.165282 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 13 00:56:19.165301 kernel: iommu: Default domain type: Translated Aug 13 00:56:19.165319 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:56:19.165337 kernel: vgaarb: loaded Aug 13 00:56:19.165356 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 00:56:19.165374 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 00:56:19.165396 kernel: PTP clock support registered Aug 13 00:56:19.165414 kernel: Registered efivars operations Aug 13 00:56:19.165432 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:56:19.165450 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 00:56:19.165468 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Aug 13 00:56:19.165486 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Aug 13 00:56:19.165503 kernel: e820: reserve RAM buffer [mem 0xbd27c000-0xbfffffff] Aug 13 00:56:19.165521 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Aug 13 00:56:19.165542 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Aug 13 00:56:19.165586 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 00:56:19.165605 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:56:19.165623 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:56:19.165641 kernel: pnp: PnP ACPI init Aug 13 00:56:19.165658 kernel: pnp: PnP ACPI: found 7 devices Aug 13 00:56:19.165685 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:56:19.165703 kernel: NET: Registered PF_INET protocol family Aug 13 00:56:19.165721 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 00:56:19.165742 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 13 00:56:19.165757 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:56:19.165773 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:56:19.165788 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Aug 13 00:56:19.165806 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 13 00:56:19.165824 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 00:56:19.165842 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 00:56:19.165860 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:56:19.165878 kernel: NET: Registered PF_XDP protocol family Aug 13 00:56:19.166067 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 00:56:19.166226 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 00:56:19.166378 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 00:56:19.166530 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Aug 13 00:56:19.174783 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 00:56:19.174833 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:56:19.174853 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 00:56:19.174881 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Aug 13 00:56:19.174900 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 00:56:19.174918 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 13 00:56:19.174936 kernel: clocksource: Switched to clocksource tsc Aug 13 00:56:19.174954 kernel: Initialise system trusted keyrings Aug 13 00:56:19.174972 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 13 00:56:19.174991 kernel: Key type asymmetric registered Aug 13 00:56:19.175009 kernel: Asymmetric key parser 'x509' registered Aug 13 00:56:19.175027 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 249) Aug 13 00:56:19.175049 kernel: io scheduler mq-deadline registered Aug 13 00:56:19.175068 kernel: io scheduler kyber registered Aug 13 00:56:19.175086 kernel: io scheduler bfq registered Aug 13 00:56:19.175104 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:56:19.175123 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 13 00:56:19.175308 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Aug 13 00:56:19.175333 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Aug 13 00:56:19.175507 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Aug 13 00:56:19.175531 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 13 00:56:19.175730 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Aug 13 00:56:19.175756 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:56:19.175775 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:56:19.175793 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 13 00:56:19.175812 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Aug 13 00:56:19.175829 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Aug 13 00:56:19.176091 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Aug 13 00:56:19.176120 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 00:56:19.176145 kernel: i8042: Warning: Keylock active Aug 13 00:56:19.176163 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 00:56:19.176180 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 00:56:19.176353 kernel: rtc_cmos 00:00: RTC can wake from S4 Aug 13 00:56:19.176505 kernel: rtc_cmos 00:00: registered as rtc0 Aug 13 00:56:19.176690 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T00:56:18 UTC (1755046578) Aug 13 00:56:19.176838 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Aug 13 00:56:19.176860 kernel: intel_pstate: CPU model not supported Aug 13 00:56:19.176882 kernel: pstore: Registered efi as persistent store backend Aug 13 00:56:19.176927 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:56:19.176946 kernel: Segment Routing with IPv6 Aug 13 00:56:19.176962 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:56:19.176979 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:56:19.176997 kernel: Key type dns_resolver registered Aug 13 00:56:19.177013 kernel: IPI shorthand broadcast: enabled Aug 13 00:56:19.177028 kernel: sched_clock: Marking stable (797134405, 155238845)->(1060028825, -107655575) Aug 13 00:56:19.177046 kernel: registered taskstats version 1 Aug 13 00:56:19.177068 kernel: Loading compiled-in X.509 certificates Aug 13 00:56:19.177084 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 00:56:19.177101 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 00:56:19.177118 kernel: Key type .fscrypt registered Aug 13 00:56:19.177132 kernel: Key type fscrypt-provisioning registered Aug 13 00:56:19.177157 kernel: pstore: Using crash dump compression: deflate Aug 13 00:56:19.177173 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:56:19.177187 kernel: ima: No architecture policies found Aug 13 00:56:19.177201 kernel: clk: Disabling unused clocks Aug 13 00:56:19.177222 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 
00:56:19.177239 kernel: Write protecting the kernel read-only data: 28672k Aug 13 00:56:19.177253 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 00:56:19.177268 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 00:56:19.177284 kernel: Run /init as init process Aug 13 00:56:19.177300 kernel: with arguments: Aug 13 00:56:19.177317 kernel: /init Aug 13 00:56:19.177334 kernel: with environment: Aug 13 00:56:19.177350 kernel: HOME=/ Aug 13 00:56:19.177371 kernel: TERM=linux Aug 13 00:56:19.177387 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:56:19.177409 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:56:19.177431 systemd[1]: Detected virtualization kvm. Aug 13 00:56:19.177451 systemd[1]: Detected architecture x86-64. Aug 13 00:56:19.177469 systemd[1]: Running in initrd. Aug 13 00:56:19.177487 systemd[1]: No hostname configured, using default hostname. Aug 13 00:56:19.177510 systemd[1]: Hostname set to . Aug 13 00:56:19.177529 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:56:19.177548 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:56:19.177610 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:56:19.177629 systemd[1]: Reached target cryptsetup.target. Aug 13 00:56:19.177648 systemd[1]: Reached target paths.target. Aug 13 00:56:19.177675 systemd[1]: Reached target slices.target. Aug 13 00:56:19.177693 systemd[1]: Reached target swap.target. Aug 13 00:56:19.177716 systemd[1]: Reached target timers.target. Aug 13 00:56:19.177736 systemd[1]: Listening on iscsid.socket. Aug 13 00:56:19.177755 systemd[1]: Listening on iscsiuio.socket. Aug 13 00:56:19.177774 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:56:19.177793 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:56:19.177811 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:56:19.177830 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:56:19.177849 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:56:19.177872 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:56:19.177891 systemd[1]: Reached target sockets.target. Aug 13 00:56:19.177929 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:56:19.177952 systemd[1]: Finished network-cleanup.service. Aug 13 00:56:19.177971 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:56:19.177991 systemd[1]: Starting systemd-journald.service... Aug 13 00:56:19.178014 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:56:19.178034 systemd[1]: Starting systemd-resolved.service... Aug 13 00:56:19.178053 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 00:56:19.178078 systemd-journald[190]: Journal started Aug 13 00:56:19.178174 systemd-journald[190]: Runtime Journal (/run/log/journal/c2a94d98a7fdd0910c3e5e216c7fab8b) is 8.0M, max 148.8M, 140.8M free. Aug 13 00:56:19.177196 systemd-modules-load[191]: Inserted module 'overlay' Aug 13 00:56:19.205592 systemd[1]: Started systemd-journald.service. Aug 13 00:56:19.221356 systemd[1]: Finished kmod-static-nodes.service. 
Aug 13 00:56:19.307776 kernel: audit: type=1130 audit(1755046579.219:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.307819 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:56:19.307845 kernel: audit: type=1130 audit(1755046579.274:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.307868 kernel: Bridge firewalling registered Aug 13 00:56:19.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.250857 systemd-resolved[192]: Positive Trust Anchors: Aug 13 00:56:19.250884 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:56:19.250951 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:56:19.489932 kernel: SCSI subsystem initialized Aug 13 00:56:19.489983 kernel: audit: type=1130 audit(1755046579.337:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.490001 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:56:19.490016 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:56:19.490031 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:56:19.490046 kernel: audit: type=1130 audit(1755046579.394:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.490062 kernel: audit: type=1130 audit(1755046579.416:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:19.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.256865 systemd-resolved[192]: Defaulting to hostname 'linux'. Aug 13 00:56:19.276370 systemd[1]: Started systemd-resolved.service. Aug 13 00:56:19.303480 systemd-modules-load[191]: Inserted module 'br_netfilter' Aug 13 00:56:19.339056 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:56:19.396350 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 00:56:19.417889 systemd-modules-load[191]: Inserted module 'dm_multipath' Aug 13 00:56:19.418106 systemd[1]: Reached target nss-lookup.target. Aug 13 00:56:19.498869 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 00:56:19.508263 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:56:19.508833 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:56:19.515291 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:56:19.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.522479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:56:19.549797 kernel: audit: type=1130 audit(1755046579.507:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.558250 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 00:56:19.613779 kernel: audit: type=1130 audit(1755046579.556:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.613824 kernel: audit: type=1130 audit(1755046579.577:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.579806 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:56:19.649763 kernel: audit: type=1130 audit(1755046579.621:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.624582 systemd[1]: Starting dracut-cmdline.service... 
Aug 13 00:56:19.657844 dracut-cmdline[210]: dracut-dracut-053 Aug 13 00:56:19.657844 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Aug 13 00:56:19.657844 dracut-cmdline[210]: BEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:56:19.736615 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:56:19.761614 kernel: iscsi: registered transport (tcp) Aug 13 00:56:19.799018 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:56:19.799108 kernel: QLogic iSCSI HBA Driver Aug 13 00:56:19.845538 systemd[1]: Finished dracut-cmdline.service. Aug 13 00:56:19.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.847171 systemd[1]: Starting dracut-pre-udev.service... Aug 13 00:56:19.911629 kernel: raid6: avx2x4 gen() 17718 MB/s Aug 13 00:56:19.932613 kernel: raid6: avx2x4 xor() 7629 MB/s Aug 13 00:56:19.953606 kernel: raid6: avx2x2 gen() 17693 MB/s Aug 13 00:56:19.974608 kernel: raid6: avx2x2 xor() 18561 MB/s Aug 13 00:56:19.995611 kernel: raid6: avx2x1 gen() 13942 MB/s Aug 13 00:56:20.016634 kernel: raid6: avx2x1 xor() 16156 MB/s Aug 13 00:56:20.037624 kernel: raid6: sse2x4 gen() 10819 MB/s Aug 13 00:56:20.058659 kernel: raid6: sse2x4 xor() 6624 MB/s Aug 13 00:56:20.079646 kernel: raid6: sse2x2 gen() 11397 MB/s Aug 13 00:56:20.100618 kernel: raid6: sse2x2 xor() 7345 MB/s Aug 13 00:56:20.121607 kernel: raid6: sse2x1 gen() 10330 MB/s Aug 13 00:56:20.147820 kernel: raid6: sse2x1 xor() 5140 MB/s Aug 13 00:56:20.147895 kernel: raid6: using algorithm avx2x4 gen() 17718 MB/s Aug 13 00:56:20.147919 kernel: raid6: .... xor() 7629 MB/s, rmw enabled Aug 13 00:56:20.152935 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:56:20.178612 kernel: xor: automatically using best checksumming function avx Aug 13 00:56:20.298606 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 00:56:20.311637 systemd[1]: Finished dracut-pre-udev.service. Aug 13 00:56:20.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:20.318000 audit: BPF prog-id=7 op=LOAD Aug 13 00:56:20.318000 audit: BPF prog-id=8 op=LOAD Aug 13 00:56:20.321218 systemd[1]: Starting systemd-udevd.service... Aug 13 00:56:20.339920 systemd-udevd[387]: Using default interface naming scheme 'v252'. Aug 13 00:56:20.347509 systemd[1]: Started systemd-udevd.service. Aug 13 00:56:20.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:20.369012 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:56:20.387776 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Aug 13 00:56:20.427700 systemd[1]: Finished dracut-pre-trigger.service. 
Aug 13 00:56:20.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:20.428888 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:56:20.499386 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:56:20.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:20.588596 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:56:20.706603 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:56:20.728593 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Aug 13 00:56:20.741957 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 00:56:20.749594 kernel: AES CTR mode by8 optimization enabled Aug 13 00:56:20.807476 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Aug 13 00:56:20.870911 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Aug 13 00:56:20.871103 kernel: sd 0:0:1:0: [sda] Write Protect is off Aug 13 00:56:20.871246 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Aug 13 00:56:20.871384 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 00:56:20.871550 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:56:20.871609 kernel: GPT:17805311 != 25165823 Aug 13 00:56:20.871629 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:56:20.871648 kernel: GPT:17805311 != 25165823 Aug 13 00:56:20.871685 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:56:20.871706 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:56:20.871729 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Aug 13 00:56:20.936254 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:56:20.953926 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (444) Aug 13 00:56:20.963767 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:56:20.979266 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:56:20.998864 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:56:21.022016 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:56:21.032878 systemd[1]: Starting disk-uuid.service... Aug 13 00:56:21.047834 disk-uuid[514]: Primary Header is updated. Aug 13 00:56:21.047834 disk-uuid[514]: Secondary Entries is updated. Aug 13 00:56:21.047834 disk-uuid[514]: Secondary Header is updated. Aug 13 00:56:21.081625 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:56:21.081685 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:56:21.111598 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:56:22.123572 disk-uuid[515]: The operation has completed successfully. Aug 13 00:56:22.132776 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:56:22.198509 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:56:22.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:22.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.198704 systemd[1]: Finished disk-uuid.service. Aug 13 00:56:22.224423 systemd[1]: Starting verity-setup.service... Aug 13 00:56:22.254616 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 00:56:22.344174 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:56:22.347021 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:56:22.360243 systemd[1]: Finished verity-setup.service. Aug 13 00:56:22.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.454590 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:56:22.455078 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:56:22.455504 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:56:22.515127 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:56:22.515172 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:56:22.515196 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:56:22.515219 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:56:22.456501 systemd[1]: Starting ignition-setup.service... Aug 13 00:56:22.478550 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:56:22.536065 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:56:22.551012 systemd[1]: Finished ignition-setup.service. Aug 13 00:56:22.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.552398 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:56:22.620383 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:56:22.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.629000 audit: BPF prog-id=9 op=LOAD Aug 13 00:56:22.632043 systemd[1]: Starting systemd-networkd.service... Aug 13 00:56:22.668001 systemd-networkd[689]: lo: Link UP Aug 13 00:56:22.668014 systemd-networkd[689]: lo: Gained carrier Aug 13 00:56:22.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.669117 systemd-networkd[689]: Enumeration completed Aug 13 00:56:22.669295 systemd[1]: Started systemd-networkd.service. Aug 13 00:56:22.669803 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:56:22.672202 systemd-networkd[689]: eth0: Link UP Aug 13 00:56:22.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:22.672210 systemd-networkd[689]: eth0: Gained carrier Aug 13 00:56:22.752737 iscsid[700]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:56:22.752737 iscsid[700]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Aug 13 00:56:22.752737 iscsid[700]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Aug 13 00:56:22.752737 iscsid[700]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:56:22.752737 iscsid[700]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:56:22.752737 iscsid[700]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:56:22.752737 iscsid[700]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:56:22.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.676032 systemd[1]: Reached target network.target. Aug 13 00:56:22.883980 ignition[621]: Ignition 2.14.0 Aug 13 00:56:22.679730 systemd-networkd[689]: eth0: DHCPv4 address 10.128.0.76/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 13 00:56:22.883998 ignition[621]: Stage: fetch-offline Aug 13 00:56:22.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.698980 systemd[1]: Starting iscsiuio.service... Aug 13 00:56:22.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.884082 ignition[621]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:56:22.730883 systemd[1]: Started iscsiuio.service. Aug 13 00:56:22.884137 ignition[621]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Aug 13 00:56:22.740151 systemd[1]: Starting iscsid.service... Aug 13 00:56:22.904405 ignition[621]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:56:22.771931 systemd[1]: Started iscsid.service. Aug 13 00:56:23.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.904671 ignition[621]: parsed url from cmdline: "" Aug 13 00:56:22.807474 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:56:22.904679 ignition[621]: no config URL provided Aug 13 00:56:22.836371 systemd[1]: Finished dracut-initqueue.service. 
Aug 13 00:56:22.904690 ignition[621]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:56:23.101743 kernel: kauditd_printk_skb: 20 callbacks suppressed Aug 13 00:56:23.101787 kernel: audit: type=1130 audit(1755046583.064:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:23.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.852130 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:56:22.904707 ignition[621]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:56:23.155771 kernel: audit: type=1130 audit(1755046583.125:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:23.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.871965 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:56:22.904719 ignition[621]: failed to fetch config: resource requires networking Aug 13 00:56:22.900031 systemd[1]: Reached target remote-fs.target. Aug 13 00:56:22.905042 ignition[621]: Ignition finished successfully Aug 13 00:56:22.909973 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:56:22.976797 ignition[714]: Ignition 2.14.0 Aug 13 00:56:22.929350 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:56:22.976809 ignition[714]: Stage: fetch Aug 13 00:56:22.942359 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:56:22.976944 ignition[714]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:56:22.964175 systemd[1]: Starting ignition-fetch.service... Aug 13 00:56:22.976976 ignition[714]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Aug 13 00:56:23.002360 unknown[714]: fetched base config from "system" Aug 13 00:56:22.985209 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:56:23.002374 unknown[714]: fetched base config from "system" Aug 13 00:56:22.985435 ignition[714]: parsed url from cmdline: "" Aug 13 00:56:23.002382 unknown[714]: fetched user config from "gcp" Aug 13 00:56:22.985441 ignition[714]: no config URL provided Aug 13 00:56:23.005030 systemd[1]: Finished ignition-fetch.service. Aug 13 00:56:22.985448 ignition[714]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:56:23.022454 systemd[1]: Starting ignition-kargs.service... Aug 13 00:56:22.985460 ignition[714]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:56:23.051253 systemd[1]: Finished ignition-kargs.service. Aug 13 00:56:22.985496 ignition[714]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Aug 13 00:56:23.067524 systemd[1]: Starting ignition-disks.service... Aug 13 00:56:22.994704 ignition[714]: GET result: OK Aug 13 00:56:23.113069 systemd[1]: Finished ignition-disks.service. 
Aug 13 00:56:22.994820 ignition[714]: parsing config with SHA512: 9ae528e106fd0c3e204cb32821bc254a781c26e3a0ec470f4a8d14ebe504d33cec1b923ad219bf4e0259ef6d0508a5e18192bd5714897d6a436b72199d439252 Aug 13 00:56:23.127002 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:56:23.003141 ignition[714]: fetch: fetch complete Aug 13 00:56:23.164898 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:56:23.003149 ignition[714]: fetch: fetch passed Aug 13 00:56:23.189881 systemd[1]: Reached target local-fs.target. Aug 13 00:56:23.003205 ignition[714]: Ignition finished successfully Aug 13 00:56:23.204925 systemd[1]: Reached target sysinit.target. Aug 13 00:56:23.035921 ignition[720]: Ignition 2.14.0 Aug 13 00:56:23.219911 systemd[1]: Reached target basic.target. Aug 13 00:56:23.035931 ignition[720]: Stage: kargs Aug 13 00:56:23.236203 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:56:23.036067 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:56:23.036098 ignition[720]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Aug 13 00:56:23.043367 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:56:23.045166 ignition[720]: kargs: kargs passed Aug 13 00:56:23.045226 ignition[720]: Ignition finished successfully Aug 13 00:56:23.101982 ignition[726]: Ignition 2.14.0 Aug 13 00:56:23.101996 ignition[726]: Stage: disks Aug 13 00:56:23.102164 ignition[726]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:56:23.102197 ignition[726]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Aug 13 00:56:23.110464 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:56:23.111983 ignition[726]: disks: disks passed Aug 13 00:56:23.112041 ignition[726]: Ignition finished successfully Aug 13 00:56:23.278585 systemd-fsck[734]: ROOT: clean, 629/1628000 files, 124064/1617920 blocks Aug 13 00:56:23.489579 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:56:23.524907 kernel: audit: type=1130 audit(1755046583.488:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:23.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:23.491063 systemd[1]: Mounting sysroot.mount... Aug 13 00:56:23.547871 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:56:23.541919 systemd[1]: Mounted sysroot.mount. Aug 13 00:56:23.555012 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:56:23.573223 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:56:23.579489 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 13 00:56:23.579548 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:56:23.579611 systemd[1]: Reached target ignition-diskful.target. 
Aug 13 00:56:23.674769 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (740) Aug 13 00:56:23.674813 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:56:23.674837 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:56:23.674858 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:56:23.591233 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:56:23.695756 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:56:23.624438 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:56:23.704841 initrd-setup-root[745]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:56:23.640232 systemd[1]: Starting initrd-setup-root.service... Aug 13 00:56:23.731747 initrd-setup-root[753]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:56:23.699017 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:56:23.750732 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:56:23.761731 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:56:23.771810 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:56:23.809786 kernel: audit: type=1130 audit(1755046583.780:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:23.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:23.783281 systemd[1]: Starting ignition-mount.service... Aug 13 00:56:23.818008 systemd[1]: Starting sysroot-boot.service... Aug 13 00:56:23.831995 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Aug 13 00:56:23.832137 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Aug 13 00:56:23.857723 ignition[805]: INFO : Ignition 2.14.0 Aug 13 00:56:23.857723 ignition[805]: INFO : Stage: mount Aug 13 00:56:23.857723 ignition[805]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:56:23.857723 ignition[805]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Aug 13 00:56:23.934778 kernel: audit: type=1130 audit(1755046583.863:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:23.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:23.935051 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:56:23.935051 ignition[805]: INFO : mount: mount passed Aug 13 00:56:23.935051 ignition[805]: INFO : Ignition finished successfully Aug 13 00:56:24.033743 kernel: audit: type=1130 audit(1755046583.942:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:24.033793 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (815) Aug 13 00:56:24.033819 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:56:24.033844 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:56:24.033867 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:56:24.033900 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:56:23.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:23.863722 systemd[1]: Finished ignition-mount.service. Aug 13 00:56:23.866317 systemd[1]: Starting ignition-files.service... Aug 13 00:56:23.916858 systemd[1]: Finished sysroot-boot.service. Aug 13 00:56:23.949760 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:56:24.073743 ignition[834]: INFO : Ignition 2.14.0 Aug 13 00:56:24.073743 ignition[834]: INFO : Stage: files Aug 13 00:56:24.073743 ignition[834]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:56:24.073743 ignition[834]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Aug 13 00:56:24.073743 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:56:24.035011 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:56:24.141771 ignition[834]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:56:24.141771 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:56:24.141771 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:56:24.141771 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:56:24.141771 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:56:24.141771 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:56:24.141771 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:56:24.141771 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:56:24.141771 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:56:24.141771 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 00:56:24.094293 unknown[834]: wrote ssh authorized keys file for user: core Aug 13 00:56:24.292755 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:56:24.585925 systemd-networkd[689]: eth0: Gained IPv6LL Aug 13 00:56:24.613803 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts" Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not 
found in "/usr/share/oem", looking on oem partition Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1999294790" Aug 13 00:56:24.630780 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1999294790": device or resource busy Aug 13 00:56:24.630780 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1999294790", trying btrfs: device or resource busy Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1999294790" Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1999294790" Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem1999294790" Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem1999294790" Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts" Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Aug 13 00:56:24.630780 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem114981796" Aug 13 00:56:24.862840 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem114981796": device or resource busy Aug 13 00:56:24.862840 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem114981796", trying btrfs: device or resource busy Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem114981796" Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem114981796" Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem114981796" Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem114981796" Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file 
"/sysroot/home/core/install.sh" Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:56:24.862840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:56:24.634111 systemd[1]: mnt-oem1999294790.mount: Deactivated successfully. Aug 13 00:56:25.123827 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:56:25.123827 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:56:25.123827 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:56:25.123827 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Aug 13 00:56:25.123827 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:56:25.123827 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1181048572" Aug 13 00:56:25.123827 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1181048572": device or resource busy Aug 13 00:56:25.123827 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1181048572", trying btrfs: device or resource busy Aug 13 00:56:25.123827 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1181048572" Aug 13 00:56:25.123827 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1181048572" Aug 13 00:56:25.123827 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem1181048572" Aug 13 00:56:25.123827 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem1181048572" Aug 13 00:56:25.123827 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Aug 13 00:56:24.655118 systemd[1]: mnt-oem114981796.mount: Deactivated successfully. 
Aug 13 00:56:25.370759 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:56:25.370759 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 00:56:25.370759 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK Aug 13 00:56:25.974930 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:56:25.993757 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Aug 13 00:56:25.993757 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:56:25.993757 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3223819189" Aug 13 00:56:25.993757 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3223819189": device or resource busy Aug 13 00:56:25.993757 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3223819189", trying btrfs: device or resource busy Aug 13 00:56:25.993757 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3223819189" Aug 13 00:56:25.993757 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3223819189" Aug 13 00:56:25.993757 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem3223819189" Aug 13 00:56:25.993757 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem3223819189" Aug 13 00:56:25.993757 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Aug 13 00:56:25.993757 ignition[834]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service" Aug 13 00:56:25.993757 ignition[834]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service" Aug 13 00:56:25.993757 ignition[834]: INFO : files: op(1d): [started] processing unit "oem-gce.service" Aug 13 00:56:25.993757 ignition[834]: INFO : files: op(1d): [finished] processing unit "oem-gce.service" Aug 13 00:56:25.993757 ignition[834]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service" Aug 13 00:56:25.993757 ignition[834]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service" Aug 13 00:56:25.993757 ignition[834]: INFO : files: op(1f): [started] processing unit "containerd.service" Aug 13 00:56:26.424790 kernel: audit: type=1130 audit(1755046586.026:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:26.424851 kernel: audit: type=1130 audit(1755046586.135:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.424879 kernel: audit: type=1130 audit(1755046586.174:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.424917 kernel: audit: type=1131 audit(1755046586.174:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.002245 systemd[1]: mnt-oem3223819189.mount: Deactivated successfully. 
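The audit records threaded through the journal (type=1130 SERVICE_START, type=1131 SERVICE_STOP) share one key=value layout, so the unit lifecycle can be pulled out of a capture like this mechanically. A small sketch, with field names taken from the records above:

```python
import re

# SERVICE_START / SERVICE_STOP audit records as they appear in the journal above.
AUDIT_RE = re.compile(
    r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+).*?res=(\w+)"
)

def summarize(log_text):
    """Print unit name, event type, and result for each audit service record."""
    for event, unit, res in AUDIT_RE.findall(log_text):
        print(f"{unit:40s} {event:13s} {res}")

summarize("audit[1]: SERVICE_STOP pid=1 uid=0 msg='unit=ignition-quench "
          "comm=\"systemd\" res=success'")
```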
Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(1f): op(20): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(1f): op(20): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(1f): [finished] processing unit "containerd.service" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(21): [started] processing unit "prepare-helm.service" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(21): op(22): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(21): op(22): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(21): [finished] processing unit "prepare-helm.service" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(23): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(23): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(24): [started] setting preset to enabled for "oem-gce.service" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(24): [finished] setting preset to enabled for "oem-gce.service" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(25): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(25): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(26): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:56:26.442843 ignition[834]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:56:26.442843 ignition[834]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:56:26.442843 ignition[834]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:56:26.442843 ignition[834]: INFO : files: files passed Aug 13 00:56:26.442843 ignition[834]: INFO : Ignition finished successfully Aug 13 00:56:26.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.014158 systemd[1]: Finished ignition-files.service. Aug 13 00:56:26.781012 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:56:26.039380 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:56:26.078766 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:56:26.079979 systemd[1]: Starting ignition-quench.service... Aug 13 00:56:26.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:26.106249 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:56:26.137326 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:56:26.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.137502 systemd[1]: Finished ignition-quench.service. Aug 13 00:56:26.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.176236 systemd[1]: Reached target ignition-complete.target. Aug 13 00:56:26.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.241148 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:56:26.946830 ignition[872]: INFO : Ignition 2.14.0 Aug 13 00:56:26.946830 ignition[872]: INFO : Stage: umount Aug 13 00:56:26.946830 ignition[872]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:56:26.946830 ignition[872]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Aug 13 00:56:26.282245 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:56:27.020990 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:56:27.020990 ignition[872]: INFO : umount: umount passed Aug 13 00:56:27.020990 ignition[872]: INFO : Ignition finished successfully Aug 13 00:56:27.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:27.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:27.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.282378 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:56:27.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.283158 systemd[1]: Reached target initrd-fs.target. Aug 13 00:56:27.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.313802 systemd[1]: Reached target initrd.target. Aug 13 00:56:27.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.331893 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Aug 13 00:56:27.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.333205 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:56:27.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.364235 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:56:26.385396 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:56:27.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.423348 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:56:26.434254 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:56:26.452240 systemd[1]: Stopped target timers.target. Aug 13 00:56:26.498198 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:56:26.498466 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:56:26.527505 systemd[1]: Stopped target initrd.target. Aug 13 00:56:26.567170 systemd[1]: Stopped target basic.target. Aug 13 00:56:27.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.598259 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:56:27.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.630231 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:56:26.663309 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:56:26.677354 systemd[1]: Stopped target remote-fs.target. Aug 13 00:56:27.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.717190 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:56:27.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.753236 systemd[1]: Stopped target sysinit.target. Aug 13 00:56:27.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:27.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:27.383000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:56:26.774219 systemd[1]: Stopped target local-fs.target. Aug 13 00:56:26.790135 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:56:26.813219 systemd[1]: Stopped target swap.target. Aug 13 00:56:27.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:26.832158 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:56:27.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.832440 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:56:27.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.859410 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:56:26.875177 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:56:27.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.875455 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:56:26.893304 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:56:26.893526 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:56:27.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.912322 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:56:27.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.912537 systemd[1]: Stopped ignition-files.service. Aug 13 00:56:27.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.930434 systemd[1]: Stopping ignition-mount.service... Aug 13 00:56:26.969245 systemd[1]: Stopping iscsiuio.service... Aug 13 00:56:26.982894 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:56:27.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:27.011833 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:56:27.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:27.012192 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:56:27.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:27.030339 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:56:27.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:27.030580 systemd[1]: Stopped dracut-pre-trigger.service. 
Aug 13 00:56:27.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:27.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:27.055582 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:56:27.056922 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 00:56:27.717000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:56:27.717000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:56:27.718000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:56:27.718000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:56:27.718000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:56:27.057129 systemd[1]: Stopped iscsiuio.service. Aug 13 00:56:27.753864 systemd-journald[190]: Failed to send stream file descriptor to service manager: Connection refused Aug 13 00:56:27.753978 systemd-journald[190]: Received SIGTERM from PID 1 (n/a). Aug 13 00:56:27.066763 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:56:27.762800 iscsid[700]: iscsid shutting down. Aug 13 00:56:27.066899 systemd[1]: Stopped ignition-mount.service. Aug 13 00:56:27.082742 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:56:27.082887 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:56:27.098105 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:56:27.098469 systemd[1]: Stopped ignition-disks.service. Aug 13 00:56:27.111945 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:56:27.112069 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:56:27.126985 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:56:27.127083 systemd[1]: Stopped ignition-fetch.service. Aug 13 00:56:27.146951 systemd[1]: Stopped target network.target. Aug 13 00:56:27.163846 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:56:27.164012 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:56:27.180004 systemd[1]: Stopped target paths.target. Aug 13 00:56:27.193777 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:56:27.197740 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:56:27.208815 systemd[1]: Stopped target slices.target. Aug 13 00:56:27.222828 systemd[1]: Stopped target sockets.target. Aug 13 00:56:27.235946 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:56:27.236021 systemd[1]: Closed iscsid.socket. Aug 13 00:56:27.252976 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:56:27.253078 systemd[1]: Closed iscsiuio.socket. Aug 13 00:56:27.266889 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:56:27.267021 systemd[1]: Stopped ignition-setup.service. Aug 13 00:56:27.283994 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:56:27.284135 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:56:27.301327 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:56:27.304699 systemd-networkd[689]: eth0: DHCPv6 lease lost Aug 13 00:56:27.317086 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:56:27.333477 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:56:27.333661 systemd[1]: Stopped systemd-resolved.service. 
Aug 13 00:56:27.351738 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:56:27.351887 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:56:27.367792 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:56:27.367924 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:56:27.385406 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:56:27.385460 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:56:27.401205 systemd[1]: Stopping network-cleanup.service... Aug 13 00:56:27.413733 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:56:27.413863 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:56:27.429902 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:56:27.429998 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:56:27.445040 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:56:27.445152 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:56:27.461101 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:56:27.477541 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:56:27.478262 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:56:27.478422 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:56:27.492516 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:56:27.492638 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:56:27.508049 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:56:27.508116 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:56:27.523002 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:56:27.523091 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:56:27.541090 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:56:27.541166 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:56:27.559134 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:56:27.559218 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:56:27.578148 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:56:27.598750 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:56:27.598918 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Aug 13 00:56:27.614068 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:56:27.614138 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:56:27.629869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:56:27.629975 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:56:27.650283 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:56:27.651011 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:56:27.651140 systemd[1]: Stopped network-cleanup.service. Aug 13 00:56:27.666233 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:56:27.666351 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:56:27.685132 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:56:27.701991 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:56:27.716507 systemd[1]: Switching root. Aug 13 00:56:27.773043 systemd-journald[190]: Journal stopped Aug 13 00:56:32.819263 kernel: SELinux: Class mctp_socket not defined in policy. 
Aug 13 00:56:32.819418 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 00:56:32.819456 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:56:32.819480 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:56:32.819502 kernel: SELinux: policy capability open_perms=1 Aug 13 00:56:32.819524 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:56:32.819549 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:56:32.819596 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:56:32.819633 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:56:32.819657 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:56:32.819679 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:56:32.819701 kernel: kauditd_printk_skb: 44 callbacks suppressed Aug 13 00:56:32.819726 kernel: audit: type=1403 audit(1755046588.232:85): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:56:32.819773 systemd[1]: Successfully loaded SELinux policy in 123.873ms. Aug 13 00:56:32.819820 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.809ms. Aug 13 00:56:32.819858 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:56:32.819884 systemd[1]: Detected virtualization kvm. Aug 13 00:56:32.819913 systemd[1]: Detected architecture x86-64. Aug 13 00:56:32.819937 systemd[1]: Detected first boot. Aug 13 00:56:32.819963 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:56:32.819988 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Aug 13 00:56:32.820019 kernel: audit: type=1400 audit(1755046588.612:86): avc: denied { associate } for pid=923 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:56:32.820044 kernel: audit: type=1300 audit(1755046588.612:86): arch=c000003e syscall=188 success=yes exit=0 a0=c0001196c2 a1=c00002cb40 a2=c00002aa40 a3=32 items=0 ppid=906 pid=923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:32.820068 kernel: audit: type=1327 audit(1755046588.612:86): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:56:32.820097 kernel: audit: type=1400 audit(1755046588.623:87): avc: denied { associate } for pid=923 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:56:32.820121 kernel: audit: type=1300 audit(1755046588.623:87): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001197a9 a2=1ed a3=0 items=2 ppid=906 pid=923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:32.820146 kernel: audit: type=1307 audit(1755046588.623:87): cwd="/" Aug 13 00:56:32.820168 kernel: audit: type=1302 audit(1755046588.623:87): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:32.820191 kernel: audit: type=1302 audit(1755046588.623:87): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:32.820215 kernel: audit: type=1327 audit(1755046588.623:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:56:32.820238 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:56:32.820267 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:56:32.820293 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:56:32.820319 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:56:32.820351 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:56:32.820375 systemd[1]: Unnecessary job was removed for dev-sda6.device. 
Aug 13 00:56:32.820399 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:56:32.820428 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:56:32.820459 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Aug 13 00:56:32.820484 systemd[1]: Created slice system-getty.slice. Aug 13 00:56:32.820511 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:56:32.820536 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:56:32.820571 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:56:32.820596 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:56:32.820631 systemd[1]: Created slice user.slice. Aug 13 00:56:32.820655 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:56:32.820679 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:56:32.820703 systemd[1]: Set up automount boot.automount. Aug 13 00:56:32.820727 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:56:32.820758 systemd[1]: Reached target integritysetup.target. Aug 13 00:56:32.820780 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:56:32.820809 systemd[1]: Reached target remote-fs.target. Aug 13 00:56:32.820834 systemd[1]: Reached target slices.target. Aug 13 00:56:32.820859 systemd[1]: Reached target swap.target. Aug 13 00:56:32.820887 systemd[1]: Reached target torcx.target. Aug 13 00:56:32.820912 systemd[1]: Reached target veritysetup.target. Aug 13 00:56:32.820936 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:56:32.820961 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:56:32.820991 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:56:32.821014 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:56:32.821039 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:56:32.821065 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:56:32.821088 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:56:32.821113 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:56:32.821141 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:56:32.821170 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:56:32.821195 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:56:32.821219 systemd[1]: Mounting media.mount... Aug 13 00:56:32.821243 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:56:32.821273 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:56:32.821296 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:56:32.821320 systemd[1]: Mounting tmp.mount... Aug 13 00:56:32.821344 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:56:32.821376 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:56:32.821400 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:56:32.821424 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:56:32.821449 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:56:32.821473 systemd[1]: Starting modprobe@drm.service... Aug 13 00:56:32.821497 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:56:32.821521 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:56:32.821545 systemd[1]: Starting modprobe@loop.service... 
Aug 13 00:56:32.821584 kernel: fuse: init (API version 7.34) Aug 13 00:56:32.821615 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:56:32.821640 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:56:32.821662 kernel: loop: module loaded Aug 13 00:56:32.821685 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 00:56:32.821710 systemd[1]: Starting systemd-journald.service... Aug 13 00:56:32.821745 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:56:32.821769 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:56:32.821801 systemd-journald[1035]: Journal started Aug 13 00:56:32.821915 systemd-journald[1035]: Runtime Journal (/run/log/journal/c2a94d98a7fdd0910c3e5e216c7fab8b) is 8.0M, max 148.8M, 140.8M free. Aug 13 00:56:32.370000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:56:32.370000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 00:56:32.813000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:56:32.813000 audit[1035]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffcb0b47530 a2=4000 a3=7ffcb0b475cc items=0 ppid=1 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:32.813000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:56:32.839591 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:56:32.861653 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:56:32.880836 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:56:32.890623 systemd[1]: Started systemd-journald.service. Aug 13 00:56:32.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:32.901297 systemd[1]: Mounted dev-hugepages.mount. Aug 13 00:56:32.909065 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:56:32.917059 systemd[1]: Mounted media.mount. Aug 13 00:56:32.925037 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:56:32.934069 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:56:32.943007 systemd[1]: Mounted tmp.mount. Aug 13 00:56:32.950687 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:56:32.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:32.960636 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:56:32.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:56:32.969358 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:56:32.969726 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:56:32.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:32.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:32.979758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:56:32.980074 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:56:32.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:32.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:32.989428 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:56:32.989769 systemd[1]: Finished modprobe@drm.service. Aug 13 00:56:32.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:32.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:32.999382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:56:32.999757 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:56:33.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.009381 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:56:33.009742 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:56:33.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.019403 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:56:33.019796 systemd[1]: Finished modprobe@loop.service. Aug 13 00:56:33.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:56:33.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.029467 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:56:33.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.039400 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:56:33.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.048406 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:56:33.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.058451 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:56:33.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.067622 systemd[1]: Reached target network-pre.target. Aug 13 00:56:33.078033 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:56:33.089235 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:56:33.096785 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:56:33.101409 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:56:33.111609 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:56:33.119830 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:56:33.123517 systemd[1]: Starting systemd-random-seed.service... Aug 13 00:56:33.126861 systemd-journald[1035]: Time spent on flushing to /var/log/journal/c2a94d98a7fdd0910c3e5e216c7fab8b is 77.750ms for 1097 entries. Aug 13 00:56:33.126861 systemd-journald[1035]: System Journal (/var/log/journal/c2a94d98a7fdd0910c3e5e216c7fab8b) is 8.0M, max 584.8M, 576.8M free. Aug 13 00:56:33.261749 systemd-journald[1035]: Received client request to flush runtime journal. Aug 13 00:56:33.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.138916 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:56:33.141974 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:56:33.152306 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:56:33.162342 systemd[1]: Starting systemd-udev-settle.service... 
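The flush summary above, 77.750 ms spent moving 1097 entries from the runtime journal to /var/log/journal, comes to roughly 0.07 ms per entry:

```python
# Per-entry cost of the journal flush reported by systemd-journald above.
elapsed_ms, entries = 77.750, 1097
print(f"{elapsed_ms / entries:.3f} ms per entry")  # ~0.071 ms
```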
Aug 13 00:56:33.262873 udevadm[1058]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:56:33.174417 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:56:33.183011 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:56:33.192386 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:56:33.269730 kernel: kauditd_printk_skb: 26 callbacks suppressed Aug 13 00:56:33.269820 kernel: audit: type=1130 audit(1755046593.262:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.204630 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:56:33.222886 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:56:33.255118 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:56:33.266676 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:56:33.303523 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:56:33.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.335799 kernel: audit: type=1130 audit(1755046593.310:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.379838 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:56:33.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.414639 kernel: audit: type=1130 audit(1755046593.388:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.987970 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:56:33.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:33.999704 systemd[1]: Starting systemd-udevd.service... Aug 13 00:56:34.022609 kernel: audit: type=1130 audit(1755046593.995:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:34.046425 systemd-udevd[1069]: Using default interface naming scheme 'v252'. Aug 13 00:56:34.108882 systemd[1]: Started systemd-udevd.service. Aug 13 00:56:34.146418 kernel: audit: type=1130 audit(1755046594.116:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:34.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:34.146165 systemd[1]: Starting systemd-networkd.service... Aug 13 00:56:34.163232 systemd[1]: Starting systemd-userdbd.service... Aug 13 00:56:34.221707 systemd[1]: Found device dev-ttyS0.device. Aug 13 00:56:34.266928 systemd[1]: Started systemd-userdbd.service. Aug 13 00:56:34.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:34.300595 kernel: audit: type=1130 audit(1755046594.274:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:34.381113 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 00:56:34.397701 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:56:34.407861 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Aug 13 00:56:34.415596 kernel: ACPI: button: Sleep Button [SLPF] Aug 13 00:56:34.421000 audit[1075]: AVC avc: denied { confidentiality } for pid=1075 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:56:34.488637 kernel: audit: type=1400 audit(1755046594.421:118): avc: denied { confidentiality } for pid=1075 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:56:34.500692 systemd-networkd[1087]: lo: Link UP Aug 13 00:56:34.500706 systemd-networkd[1087]: lo: Gained carrier Aug 13 00:56:34.502321 systemd-networkd[1087]: Enumeration completed Aug 13 00:56:34.502688 systemd[1]: Started systemd-networkd.service. Aug 13 00:56:34.504769 systemd-networkd[1087]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:56:34.507227 systemd-networkd[1087]: eth0: Link UP Aug 13 00:56:34.507400 systemd-networkd[1087]: eth0: Gained carrier Aug 13 00:56:34.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:34.533631 kernel: audit: type=1130 audit(1755046594.509:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:34.421000 audit[1075]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564904b17990 a1=338ac a2=7fca5e0eebc5 a3=5 items=110 ppid=1069 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:34.545812 systemd-networkd[1087]: eth0: DHCPv4 address 10.128.0.76/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 13 00:56:34.584592 kernel: audit: type=1300 audit(1755046594.421:118): arch=c000003e syscall=175 success=yes exit=0 a0=564904b17990 a1=338ac a2=7fca5e0eebc5 a3=5 items=110 ppid=1069 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:34.421000 audit: CWD cwd="/" Aug 13 00:56:34.597894 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:56:34.598035 kernel: audit: type=1307 audit(1755046594.421:118): cwd="/" Aug 13 00:56:34.421000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=1 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=2 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=3 name=(null) inode=14099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=4 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=5 name=(null) inode=14100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=6 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=7 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=8 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=9 name=(null) inode=14102 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=10 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=11 name=(null) inode=14103 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=12 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=13 name=(null) inode=14104 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=14 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=15 name=(null) inode=14105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=16 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=17 name=(null) inode=14106 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=18 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=19 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=20 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=21 name=(null) inode=14108 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=22 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=23 name=(null) inode=14109 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=24 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=25 name=(null) inode=14110 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=26 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=27 name=(null) inode=14111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=28 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=29 name=(null) inode=14112 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=30 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=31 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=32 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=33 name=(null) inode=14114 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=34 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=35 name=(null) inode=14115 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=36 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=37 name=(null) inode=14116 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=38 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=39 name=(null) inode=14117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=40 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=41 name=(null) inode=14118 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=42 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=43 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=44 
name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=45 name=(null) inode=14120 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=46 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=47 name=(null) inode=14121 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=48 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=49 name=(null) inode=14122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=50 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=51 name=(null) inode=14123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=52 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=53 name=(null) inode=14124 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=55 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=56 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=57 name=(null) inode=14126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=58 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=59 name=(null) inode=14127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=60 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=61 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=62 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=63 name=(null) inode=14129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=64 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=65 name=(null) inode=14130 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=66 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=67 name=(null) inode=14131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=68 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=69 name=(null) inode=14132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=70 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=71 name=(null) inode=14133 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=72 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=73 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=74 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=75 name=(null) inode=14135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=76 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=77 name=(null) inode=14136 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=78 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=79 name=(null) inode=14137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=80 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=81 name=(null) inode=14138 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=82 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=83 name=(null) inode=14139 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=84 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=85 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=86 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=87 name=(null) inode=14141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=88 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=89 name=(null) inode=14142 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=90 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=91 name=(null) inode=14143 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=92 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH 
item=93 name=(null) inode=14144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=94 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=95 name=(null) inode=14145 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=96 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=97 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=98 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=99 name=(null) inode=14147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=100 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=101 name=(null) inode=14148 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=102 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=103 name=(null) inode=14149 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=104 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=105 name=(null) inode=14150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=106 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=107 name=(null) inode=14151 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PATH item=109 name=(null) inode=14154 dev=00:07 mode=040755 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:56:34.421000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:56:34.631625 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Aug 13 00:56:34.645913 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Aug 13 00:56:34.675586 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:56:34.699828 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:56:34.709245 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:56:34.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:34.719852 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:56:34.751142 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:56:34.785278 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:56:34.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:34.794163 systemd[1]: Reached target cryptsetup.target. Aug 13 00:56:34.804571 systemd[1]: Starting lvm2-activation.service... Aug 13 00:56:34.811908 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:56:34.838318 systemd[1]: Finished lvm2-activation.service. Aug 13 00:56:34.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:34.847203 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:56:34.855798 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:56:34.855848 systemd[1]: Reached target local-fs.target. Aug 13 00:56:34.864766 systemd[1]: Reached target machines.target. Aug 13 00:56:34.875529 systemd[1]: Starting ldconfig.service... Aug 13 00:56:34.884669 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:56:34.884778 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:56:34.886748 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:56:34.896917 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:56:34.909081 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:56:34.911674 systemd[1]: Starting systemd-sysext.service... Aug 13 00:56:34.912536 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1113 (bootctl) Aug 13 00:56:34.916511 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:56:34.937490 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Aug 13 00:56:34.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:34.952830 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:56:34.964622 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:56:34.965082 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:56:34.991622 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 00:56:35.101035 systemd-fsck[1125]: fsck.fat 4.2 (2021-01-31) Aug 13 00:56:35.101035 systemd-fsck[1125]: /dev/sda1: 789 files, 119324/258078 clusters Aug 13 00:56:35.104006 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:56:35.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:35.117964 systemd[1]: Mounting boot.mount... Aug 13 00:56:35.137185 systemd[1]: Mounted boot.mount. Aug 13 00:56:35.174578 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:56:35.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:35.389910 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:56:35.392715 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:56:35.398931 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:56:35.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:35.434614 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 00:56:35.470667 (sd-sysext)[1135]: Using extensions 'kubernetes'. Aug 13 00:56:35.473451 (sd-sysext)[1135]: Merged extensions into '/usr'. Aug 13 00:56:35.502406 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:56:35.505181 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:56:35.512921 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:56:35.515541 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:56:35.525898 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:56:35.535275 systemd[1]: Starting modprobe@loop.service... Aug 13 00:56:35.543837 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:56:35.544099 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:56:35.544306 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:56:35.550218 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:56:35.558372 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:56:35.558717 systemd[1]: Finished modprobe@dm_mod.service. 
Aug 13 00:56:35.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:35.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:35.568861 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:56:35.569179 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:56:35.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:35.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:35.578898 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:56:35.579319 systemd[1]: Finished modprobe@loop.service. Aug 13 00:56:35.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:35.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:35.588783 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:56:35.589020 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:56:35.592060 systemd[1]: Finished systemd-sysext.service. Aug 13 00:56:35.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:35.603650 systemd[1]: Starting ensure-sysext.service... Aug 13 00:56:35.614618 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:56:35.627709 systemd[1]: Reloading. Aug 13 00:56:35.630796 ldconfig[1112]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:56:35.650982 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:56:35.654150 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:56:35.660031 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Aug 13 00:56:35.770359 /usr/lib/systemd/system-generators/torcx-generator[1169]: time="2025-08-13T00:56:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:56:35.770413 /usr/lib/systemd/system-generators/torcx-generator[1169]: time="2025-08-13T00:56:35Z" level=info msg="torcx already run" Aug 13 00:56:35.975618 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:56:35.975650 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:56:35.977782 systemd-networkd[1087]: eth0: Gained IPv6LL Aug 13 00:56:36.004147 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:56:36.089694 systemd[1]: Finished ldconfig.service. Aug 13 00:56:36.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:36.101237 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:56:36.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:36.116878 systemd[1]: Starting audit-rules.service... Aug 13 00:56:36.127316 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:56:36.139006 systemd[1]: Starting oem-gce-enable-oslogin.service... Aug 13 00:56:36.151219 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:56:36.163759 systemd[1]: Starting systemd-resolved.service... Aug 13 00:56:36.175097 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:56:36.187964 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:56:36.199625 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:56:36.204000 audit[1246]: SYSTEM_BOOT pid=1246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:56:36.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:36.210186 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Aug 13 00:56:36.210735 systemd[1]: Finished oem-gce-enable-oslogin.service. Aug 13 00:56:36.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:36.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:36.230943 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:56:36.235000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:56:36.235000 audit[1254]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd707b8e90 a2=420 a3=0 items=0 ppid=1221 pid=1254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:36.237751 augenrules[1254]: No rules Aug 13 00:56:36.235000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:56:36.242300 systemd[1]: Finished audit-rules.service. Aug 13 00:56:36.255733 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:56:36.256360 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:56:36.259545 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:56:36.269775 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:56:36.279805 systemd[1]: Starting modprobe@loop.service... Aug 13 00:56:36.290013 systemd[1]: Starting oem-gce-enable-oslogin.service... Aug 13 00:56:36.298918 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:56:36.299260 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:56:36.302897 systemd[1]: Starting systemd-update-done.service... Aug 13 00:56:36.309747 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:56:36.310035 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:56:36.313281 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:56:36.317798 enable-oslogin[1267]: /etc/pam.d/sshd already exists. Not enabling OS Login Aug 13 00:56:36.322896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:56:36.323268 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:56:36.332891 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:56:36.333273 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:56:36.342813 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:56:36.343147 systemd[1]: Finished modprobe@loop.service. Aug 13 00:56:36.352689 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Aug 13 00:56:36.353024 systemd[1]: Finished oem-gce-enable-oslogin.service. Aug 13 00:56:36.362779 systemd[1]: Finished systemd-update-done.service. Aug 13 00:56:36.376834 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:56:36.377403 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:56:36.382343 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:56:36.392661 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:56:36.402355 systemd[1]: Starting modprobe@loop.service... Aug 13 00:56:36.412788 systemd[1]: Starting oem-gce-enable-oslogin.service... 
Aug 13 00:56:36.421790 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:56:36.422101 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:56:36.422321 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:56:36.422473 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:56:36.424930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:56:36.425259 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:56:36.426710 enable-oslogin[1279]: /etc/pam.d/sshd already exists. Not enabling OS Login Aug 13 00:56:36.434673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:56:36.434998 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:56:36.444725 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:56:36.445048 systemd[1]: Finished modprobe@loop.service. Aug 13 00:56:36.454751 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Aug 13 00:56:36.455234 systemd[1]: Finished oem-gce-enable-oslogin.service. Aug 13 00:56:36.464638 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:56:36.464863 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:56:36.470915 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:56:36.471503 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:56:36.475294 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:56:36.485693 systemd[1]: Starting modprobe@drm.service... Aug 13 00:56:36.495482 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:56:36.506027 systemd[1]: Starting modprobe@loop.service... Aug 13 00:56:36.519038 systemd[1]: Starting oem-gce-enable-oslogin.service... Aug 13 00:56:36.528018 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:56:36.123695 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:56:36.179115 systemd-journald[1035]: Time jumped backwards, rotating. Aug 13 00:56:36.179292 enable-oslogin[1289]: /etc/pam.d/sshd already exists. Not enabling OS Login Aug 13 00:56:36.127109 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:56:36.132299 systemd-timesyncd[1240]: Contacted time server 169.254.169.254:123 (169.254.169.254). Aug 13 00:56:36.132417 systemd-timesyncd[1240]: Initial clock synchronization to Wed 2025-08-13 00:56:36.123635 UTC. Aug 13 00:56:36.138873 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:56:36.139183 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:56:36.141844 systemd[1]: Started systemd-timesyncd.service. 
Aug 13 00:56:36.152672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:56:36.153040 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:56:36.157916 systemd-resolved[1234]: Positive Trust Anchors: Aug 13 00:56:36.157935 systemd-resolved[1234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:56:36.157995 systemd-resolved[1234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:56:36.163968 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:56:36.164315 systemd[1]: Finished modprobe@drm.service. Aug 13 00:56:36.172353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:56:36.172715 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:56:36.181766 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:56:36.181992 systemd[1]: Finished modprobe@loop.service. Aug 13 00:56:36.191772 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Aug 13 00:56:36.192200 systemd[1]: Finished oem-gce-enable-oslogin.service. Aug 13 00:56:36.200479 systemd-resolved[1234]: Defaulting to hostname 'linux'. Aug 13 00:56:36.201930 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:56:36.212433 systemd[1]: Started systemd-resolved.service. Aug 13 00:56:36.221931 systemd[1]: Reached target network.target. Aug 13 00:56:36.230873 systemd[1]: Reached target network-online.target. Aug 13 00:56:36.239829 systemd[1]: Reached target nss-lookup.target. Aug 13 00:56:36.248806 systemd[1]: Reached target time-set.target. Aug 13 00:56:36.257856 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:56:36.257934 systemd[1]: Reached target sysinit.target. Aug 13 00:56:36.266965 systemd[1]: Started motdgen.path. Aug 13 00:56:36.274893 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:56:36.285078 systemd[1]: Started logrotate.timer. Aug 13 00:56:36.293051 systemd[1]: Started mdadm.timer. Aug 13 00:56:36.300817 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:56:36.309808 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:56:36.309880 systemd[1]: Reached target paths.target. Aug 13 00:56:36.316812 systemd[1]: Reached target timers.target. Aug 13 00:56:36.324608 systemd[1]: Listening on dbus.socket. Aug 13 00:56:36.333536 systemd[1]: Starting docker.socket... Aug 13 00:56:36.343218 systemd[1]: Listening on sshd.socket. Aug 13 00:56:36.350943 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:56:36.351103 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:56:36.352174 systemd[1]: Finished ensure-sysext.service. Aug 13 00:56:36.361073 systemd[1]: Listening on docker.socket. Aug 13 00:56:36.369056 systemd[1]: Reached target sockets.target. 
Aug 13 00:56:36.377791 systemd[1]: Reached target basic.target. Aug 13 00:56:36.385074 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:56:36.385171 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:56:36.385210 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:56:36.386904 systemd[1]: Starting containerd.service... Aug 13 00:56:36.395690 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Aug 13 00:56:36.407145 systemd[1]: Starting dbus.service... Aug 13 00:56:36.417282 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:56:36.427396 systemd[1]: Starting extend-filesystems.service... Aug 13 00:56:36.435801 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:56:36.463932 jq[1304]: false Aug 13 00:56:36.439621 systemd[1]: Starting kubelet.service... Aug 13 00:56:36.447122 systemd[1]: Starting motdgen.service... Aug 13 00:56:36.455798 systemd[1]: Starting oem-gce.service... Aug 13 00:56:36.465851 systemd[1]: Starting prepare-helm.service... Aug 13 00:56:36.476072 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:56:36.486968 systemd[1]: Starting sshd-keygen.service... Aug 13 00:56:36.499004 systemd[1]: Starting systemd-logind.service... Aug 13 00:56:36.505790 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:56:36.505936 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Aug 13 00:56:36.508960 systemd[1]: Starting update-engine.service... Aug 13 00:56:36.518034 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:56:36.533450 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:56:36.533890 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:56:36.545205 jq[1326]: true Aug 13 00:56:36.559501 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:56:36.560021 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Aug 13 00:56:36.598890 jq[1336]: true Aug 13 00:56:36.601006 mkfs.ext4[1339]: mke2fs 1.46.5 (30-Dec-2021) Aug 13 00:56:36.610813 mkfs.ext4[1339]: Discarding device blocks: done Aug 13 00:56:36.610813 mkfs.ext4[1339]: Creating filesystem with 262144 4k blocks and 65536 inodes Aug 13 00:56:36.610813 mkfs.ext4[1339]: Filesystem UUID: 8cb73d89-2362-4178-80dd-cc19d7f97e82 Aug 13 00:56:36.610813 mkfs.ext4[1339]: Superblock backups stored on blocks: Aug 13 00:56:36.610813 mkfs.ext4[1339]: 32768, 98304, 163840, 229376 Aug 13 00:56:36.610813 mkfs.ext4[1339]: Allocating group tables: done Aug 13 00:56:36.610813 mkfs.ext4[1339]: Writing inode tables: done Aug 13 00:56:36.614672 mkfs.ext4[1339]: Creating journal (8192 blocks): done Aug 13 00:56:36.616941 extend-filesystems[1305]: Found loop1 Aug 13 00:56:36.626940 mkfs.ext4[1339]: Writing superblocks and filesystem accounting information: done Aug 13 00:56:36.654414 extend-filesystems[1305]: Found sda Aug 13 00:56:36.654414 extend-filesystems[1305]: Found sda1 Aug 13 00:56:36.654414 extend-filesystems[1305]: Found sda2 Aug 13 00:56:36.654414 extend-filesystems[1305]: Found sda3 Aug 13 00:56:36.687140 extend-filesystems[1305]: Found usr Aug 13 00:56:36.687140 extend-filesystems[1305]: Found sda4 Aug 13 00:56:36.687140 extend-filesystems[1305]: Found sda6 Aug 13 00:56:36.687140 extend-filesystems[1305]: Found sda7 Aug 13 00:56:36.687140 extend-filesystems[1305]: Found sda9 Aug 13 00:56:36.687140 extend-filesystems[1305]: Checking size of /dev/sda9 Aug 13 00:56:36.766088 kernel: loop2: detected capacity change from 0 to 2097152 Aug 13 00:56:36.766162 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Aug 13 00:56:36.693333 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:56:36.766666 extend-filesystems[1305]: Resized partition /dev/sda9 Aug 13 00:56:36.693809 systemd[1]: Finished motdgen.service. Aug 13 00:56:36.780352 umount[1358]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Aug 13 00:56:36.780846 extend-filesystems[1371]: resize2fs 1.46.5 (30-Dec-2021) Aug 13 00:56:36.790270 tar[1333]: linux-amd64/helm Aug 13 00:56:36.795021 dbus-daemon[1303]: [system] SELinux support is enabled Aug 13 00:56:36.799522 systemd[1]: Started dbus.service. Aug 13 00:56:36.804025 update_engine[1324]: I0813 00:56:36.803850 1324 main.cc:92] Flatcar Update Engine starting Aug 13 00:56:36.810533 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:56:36.810610 systemd[1]: Reached target system-config.target. Aug 13 00:56:36.817315 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:56:36.817374 systemd[1]: Reached target user-config.target. 
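The mkfs.ext4 run and the loop2 capacity change above come from oem-gce.service preparing a 1 GiB loop-backed ext4 image (262144 blocks of 4k). A rough Python sketch of the same idea, using a hypothetical image path; this is not the unit's actual implementation:

# Create a sparse 1 GiB file and format it as ext4, mirroring the logged
# mkfs.ext4 geometry. -F is required because the target is a regular file.
import subprocess

IMAGE = "/tmp/example-oem.img"   # hypothetical path, not the unit's
SIZE_BYTES = 262144 * 4096       # 262144 blocks of 4k = 1 GiB

def make_image():
    with open(IMAGE, "wb") as f:
        f.truncate(SIZE_BYTES)                      # sparse allocation
    subprocess.run(["mkfs.ext4", "-F", IMAGE], check=True)

if __name__ == "__main__":
    make_image()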
Aug 13 00:56:36.826960 dbus-daemon[1303]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1087 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:56:36.836578 dbus-daemon[1303]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:56:36.849408 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:56:36.849530 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Aug 13 00:56:36.870552 update_engine[1324]: I0813 00:56:36.870369 1324 update_check_scheduler.cc:74] Next update check in 6m35s Aug 13 00:56:36.879150 systemd[1]: Starting systemd-hostnamed.service... Aug 13 00:56:36.880968 extend-filesystems[1371]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 00:56:36.880968 extend-filesystems[1371]: old_desc_blocks = 1, new_desc_blocks = 2 Aug 13 00:56:36.880968 extend-filesystems[1371]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Aug 13 00:56:36.916981 bash[1378]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:56:36.896222 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:56:36.917246 extend-filesystems[1305]: Resized filesystem in /dev/sda9 Aug 13 00:56:36.896774 systemd[1]: Finished extend-filesystems.service. Aug 13 00:56:36.902137 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:56:36.938090 systemd[1]: Started update-engine.service. Aug 13 00:56:36.956576 systemd[1]: Started locksmithd.service. Aug 13 00:56:36.959676 env[1335]: time="2025-08-13T00:56:36.959536488Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:56:37.027430 systemd-logind[1323]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 00:56:37.043741 systemd-logind[1323]: Watching system buttons on /dev/input/event2 (Sleep Button) Aug 13 00:56:37.044000 systemd-logind[1323]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:56:37.051764 systemd-logind[1323]: New seat seat0. Aug 13 00:56:37.059614 systemd[1]: Started systemd-logind.service. 
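extend-filesystems grew the mounted root filesystem on /dev/sda9 online from 1617920 to 2538491 blocks. That operation reduces to a single resize2fs call; a sketch of it, assuming root privileges and the device path reported above:

# Grow a mounted ext4 filesystem to fill its (already enlarged) partition.
# With no explicit size argument resize2fs expands to the partition size,
# and ext4 allows this while the filesystem is mounted (online resize).
import subprocess

def grow_online(device="/dev/sda9"):
    subprocess.run(["resize2fs", device], check=True)

if __name__ == "__main__":
    grow_online()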
Aug 13 00:56:37.115053 coreos-metadata[1302]: Aug 13 00:56:37.114 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Aug 13 00:56:37.120179 coreos-metadata[1302]: Aug 13 00:56:37.120 INFO Fetch failed with 404: resource not found Aug 13 00:56:37.120179 coreos-metadata[1302]: Aug 13 00:56:37.120 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Aug 13 00:56:37.120996 coreos-metadata[1302]: Aug 13 00:56:37.120 INFO Fetch successful Aug 13 00:56:37.120996 coreos-metadata[1302]: Aug 13 00:56:37.120 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Aug 13 00:56:37.123243 coreos-metadata[1302]: Aug 13 00:56:37.123 INFO Fetch failed with 404: resource not found Aug 13 00:56:37.123624 coreos-metadata[1302]: Aug 13 00:56:37.123 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Aug 13 00:56:37.125218 coreos-metadata[1302]: Aug 13 00:56:37.125 INFO Fetch failed with 404: resource not found Aug 13 00:56:37.125458 coreos-metadata[1302]: Aug 13 00:56:37.125 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Aug 13 00:56:37.133200 coreos-metadata[1302]: Aug 13 00:56:37.133 INFO Fetch successful Aug 13 00:56:37.135944 unknown[1302]: wrote ssh authorized keys file for user: core Aug 13 00:56:37.187819 update-ssh-keys[1409]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:56:37.189252 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Aug 13 00:56:37.233901 env[1335]: time="2025-08-13T00:56:37.233825984Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:56:37.234259 env[1335]: time="2025-08-13T00:56:37.234225747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:56:37.236802 env[1335]: time="2025-08-13T00:56:37.236746311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:56:37.244712 env[1335]: time="2025-08-13T00:56:37.244658234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:56:37.252964 env[1335]: time="2025-08-13T00:56:37.252904494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:56:37.253166 env[1335]: time="2025-08-13T00:56:37.253136901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:56:37.253280 env[1335]: time="2025-08-13T00:56:37.253254829Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:56:37.253377 env[1335]: time="2025-08-13T00:56:37.253354122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:56:37.253696 env[1335]: time="2025-08-13T00:56:37.253662794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:56:37.254212 env[1335]: time="2025-08-13T00:56:37.254182051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:56:37.254967 env[1335]: time="2025-08-13T00:56:37.254931237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:56:37.260007 env[1335]: time="2025-08-13T00:56:37.259958080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:56:37.260273 env[1335]: time="2025-08-13T00:56:37.260241681Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:56:37.261659 env[1335]: time="2025-08-13T00:56:37.261627174Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:56:37.275735 env[1335]: time="2025-08-13T00:56:37.275682808Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:56:37.275964 env[1335]: time="2025-08-13T00:56:37.275934202Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:56:37.276082 env[1335]: time="2025-08-13T00:56:37.276059262Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:56:37.276250 env[1335]: time="2025-08-13T00:56:37.276224151Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:56:37.276430 env[1335]: time="2025-08-13T00:56:37.276408171Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:56:37.276534 env[1335]: time="2025-08-13T00:56:37.276514207Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:56:37.276664 env[1335]: time="2025-08-13T00:56:37.276640302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:56:37.276776 env[1335]: time="2025-08-13T00:56:37.276755196Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:56:37.276886 env[1335]: time="2025-08-13T00:56:37.276866345Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:56:37.276987 env[1335]: time="2025-08-13T00:56:37.276967776Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:56:37.277155 env[1335]: time="2025-08-13T00:56:37.277132896Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:56:37.277264 env[1335]: time="2025-08-13T00:56:37.277244077Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:56:37.277502 env[1335]: time="2025-08-13T00:56:37.277478176Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:56:37.277751 env[1335]: time="2025-08-13T00:56:37.277730935Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Aug 13 00:56:37.278430 env[1335]: time="2025-08-13T00:56:37.278398650Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:56:37.278606 env[1335]: time="2025-08-13T00:56:37.278563731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.278735 env[1335]: time="2025-08-13T00:56:37.278710186Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:56:37.278896 env[1335]: time="2025-08-13T00:56:37.278872265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.279002 env[1335]: time="2025-08-13T00:56:37.278980070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.279102 env[1335]: time="2025-08-13T00:56:37.279080530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.279208 env[1335]: time="2025-08-13T00:56:37.279185852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.279440 env[1335]: time="2025-08-13T00:56:37.279417365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.279625 env[1335]: time="2025-08-13T00:56:37.279565299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.279760 env[1335]: time="2025-08-13T00:56:37.279737431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.279891 env[1335]: time="2025-08-13T00:56:37.279870767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.280041 env[1335]: time="2025-08-13T00:56:37.280018790Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:56:37.280396 env[1335]: time="2025-08-13T00:56:37.280353926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.280540 env[1335]: time="2025-08-13T00:56:37.280518346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.280688 env[1335]: time="2025-08-13T00:56:37.280665543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:56:37.280816 env[1335]: time="2025-08-13T00:56:37.280794509Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:56:37.280959 env[1335]: time="2025-08-13T00:56:37.280933329Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:56:37.281073 env[1335]: time="2025-08-13T00:56:37.281052043Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:56:37.281204 env[1335]: time="2025-08-13T00:56:37.281181747Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:56:37.281379 env[1335]: time="2025-08-13T00:56:37.281348959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:56:37.281985 env[1335]: time="2025-08-13T00:56:37.281873944Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:56:37.288478 env[1335]: time="2025-08-13T00:56:37.288435325Z" level=info msg="Connect containerd service" Aug 13 00:56:37.288751 env[1335]: time="2025-08-13T00:56:37.288705630Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:56:37.290698 env[1335]: time="2025-08-13T00:56:37.290635970Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:56:37.298108 env[1335]: time="2025-08-13T00:56:37.298061509Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:56:37.304515 env[1335]: time="2025-08-13T00:56:37.304454586Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:56:37.304966 systemd[1]: Started containerd.service. 
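The coreos-metadata fetches earlier in this section walk the GCE metadata server's instance and project attribute paths; the 404s simply mean a given attribute is not set. Every such request must carry the Metadata-Flavor: Google header. A standard-library sketch of one of those fetches:

# Fetch the instance ssh-keys attribute the way coreos-metadata does above.
# A 404 means the attribute is absent, which the log treats as non-fatal.
import urllib.error
import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys"

def fetch_ssh_keys():
    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None
        raise

if __name__ == "__main__":
    print(fetch_ssh_keys())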
Aug 13 00:56:37.305347 env[1335]: time="2025-08-13T00:56:37.305300173Z" level=info msg="containerd successfully booted in 0.358666s" Aug 13 00:56:37.335305 env[1335]: time="2025-08-13T00:56:37.304279084Z" level=info msg="Start subscribing containerd event" Aug 13 00:56:37.335638 env[1335]: time="2025-08-13T00:56:37.335560834Z" level=info msg="Start recovering state" Aug 13 00:56:37.335969 env[1335]: time="2025-08-13T00:56:37.335926548Z" level=info msg="Start event monitor" Aug 13 00:56:37.336113 env[1335]: time="2025-08-13T00:56:37.336092509Z" level=info msg="Start snapshots syncer" Aug 13 00:56:37.336233 env[1335]: time="2025-08-13T00:56:37.336210962Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:56:37.336355 env[1335]: time="2025-08-13T00:56:37.336335820Z" level=info msg="Start streaming server" Aug 13 00:56:37.526756 dbus-daemon[1303]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:56:37.527016 systemd[1]: Started systemd-hostnamed.service. Aug 13 00:56:37.528192 dbus-daemon[1303]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1383 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:56:37.544121 systemd[1]: Starting polkit.service... Aug 13 00:56:37.618130 polkitd[1415]: Started polkitd version 121 Aug 13 00:56:37.666073 polkitd[1415]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 00:56:37.666175 polkitd[1415]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 00:56:37.668791 polkitd[1415]: Finished loading, compiling and executing 2 rules Aug 13 00:56:37.671445 dbus-daemon[1303]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 00:56:37.671713 systemd[1]: Started polkit.service. Aug 13 00:56:37.672654 polkitd[1415]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:56:37.696769 systemd-hostnamed[1383]: Hostname set to (transient) Aug 13 00:56:37.700466 systemd-resolved[1234]: System hostname changed to 'ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal'. Aug 13 00:56:38.327214 tar[1333]: linux-amd64/LICENSE Aug 13 00:56:38.327837 tar[1333]: linux-amd64/README.md Aug 13 00:56:38.346068 systemd[1]: Finished prepare-helm.service. Aug 13 00:56:39.297056 systemd[1]: Started kubelet.service. Aug 13 00:56:39.731438 locksmithd[1393]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:56:40.645726 kubelet[1434]: E0813 00:56:40.645663 1434 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:56:40.648583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:56:40.648901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:56:40.764656 sshd_keygen[1356]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:56:40.810190 systemd[1]: Finished sshd-keygen.service. Aug 13 00:56:40.820250 systemd[1]: Starting issuegen.service... Aug 13 00:56:40.830540 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:56:40.830960 systemd[1]: Finished issuegen.service. Aug 13 00:56:40.841931 systemd[1]: Starting systemd-user-sessions.service... 
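The kubelet failure above is expected at this stage: /var/lib/kubelet/config.yaml does not exist until cluster bootstrap tooling (kubeadm or similar) writes it. Purely to illustrate the file the error refers to, a hedged sketch that drops in a stub KubeletConfiguration header; the field set is illustrative and not a working cluster configuration:

# Write a stub of the file kubelet complained about. Real clusters generate
# this via their bootstrap tooling; only apiVersion/kind are shown here.
from pathlib import Path

MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# remaining fields depend on how the cluster is bootstrapped
"""

def write_stub(path="/var/lib/kubelet/config.yaml"):
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(MINIMAL_CONFIG)

if __name__ == "__main__":
    write_stub()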
Aug 13 00:56:40.855416 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:56:40.866085 systemd[1]: Started getty@tty1.service. Aug 13 00:56:40.877742 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 00:56:40.887155 systemd[1]: Reached target getty.target. Aug 13 00:56:43.229124 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Aug 13 00:56:44.793054 systemd[1]: Created slice system-sshd.slice. Aug 13 00:56:44.804266 systemd[1]: Started sshd@0-10.128.0.76:22-139.178.68.195:54322.service. Aug 13 00:56:45.142693 sshd[1461]: Accepted publickey for core from 139.178.68.195 port 54322 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:56:45.148223 sshd[1461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:45.168493 systemd[1]: Created slice user-500.slice. Aug 13 00:56:45.178762 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:56:45.190158 systemd-logind[1323]: New session 1 of user core. Aug 13 00:56:45.198191 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:56:45.209008 systemd[1]: Starting user@500.service... Aug 13 00:56:45.232656 (systemd)[1466]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:45.397314 systemd[1466]: Queued start job for default target default.target. Aug 13 00:56:45.398140 systemd[1466]: Reached target paths.target. Aug 13 00:56:45.398172 systemd[1466]: Reached target sockets.target. Aug 13 00:56:45.398195 systemd[1466]: Reached target timers.target. Aug 13 00:56:45.398215 systemd[1466]: Reached target basic.target. Aug 13 00:56:45.398406 systemd[1]: Started user@500.service. Aug 13 00:56:45.399286 systemd[1466]: Reached target default.target. Aug 13 00:56:45.399377 systemd[1466]: Startup finished in 155ms. Aug 13 00:56:45.408919 systemd[1]: Started session-1.scope. Aug 13 00:56:45.420710 kernel: loop2: detected capacity change from 0 to 2097152 Aug 13 00:56:45.442959 systemd-nspawn[1472]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Aug 13 00:56:45.442959 systemd-nspawn[1472]: Press ^] three times within 1s to kill container. Aug 13 00:56:45.461677 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:56:45.537781 systemd[1]: Started oem-gce.service. Aug 13 00:56:45.545329 systemd[1]: Reached target multi-user.target. Aug 13 00:56:45.556707 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:56:45.575137 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:56:45.575554 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:56:45.593614 systemd[1]: Startup finished in 10.372s (kernel) + 17.901s (userspace) = 28.273s. Aug 13 00:56:45.622179 systemd-nspawn[1472]: + '[' -e /etc/default/instance_configs.cfg.template ']' Aug 13 00:56:45.622179 systemd-nspawn[1472]: + echo -e '[InstanceSetup]\nset_host_keys = false' Aug 13 00:56:45.622179 systemd-nspawn[1472]: + /usr/bin/google_instance_setup Aug 13 00:56:45.641442 systemd[1]: Started sshd@1-10.128.0.76:22-139.178.68.195:54332.service. Aug 13 00:56:45.941890 sshd[1484]: Accepted publickey for core from 139.178.68.195 port 54332 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:56:45.943008 sshd[1484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:45.950677 systemd-logind[1323]: New session 2 of user core. Aug 13 00:56:45.951510 systemd[1]: Started session-2.scope. 
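The sshd entries above identify the accepted key by its OpenSSH SHA256 fingerprint, which is the unpadded base64 of a SHA-256 digest over the raw key blob. A sketch that reproduces such a fingerprint from an authorized_keys line; the path is the one updated earlier in this log:

# Compute the OpenSSH-style "SHA256:..." fingerprint sshd logs for a key.
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line):
    # Line format: "<type> <base64-blob> [comment]"; the fingerprint is the
    # unpadded base64 of sha256 over the decoded blob.
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    with open("/home/core/.ssh/authorized_keys") as f:  # path from this log
        print(ssh_fingerprint(f.readline()))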
Aug 13 00:56:46.158909 sshd[1484]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:46.163652 systemd[1]: sshd@1-10.128.0.76:22-139.178.68.195:54332.service: Deactivated successfully. Aug 13 00:56:46.164992 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:56:46.167968 systemd-logind[1323]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:56:46.170785 systemd-logind[1323]: Removed session 2. Aug 13 00:56:46.201745 systemd[1]: Started sshd@2-10.128.0.76:22-139.178.68.195:54348.service. Aug 13 00:56:46.379560 instance-setup[1483]: INFO Running google_set_multiqueue. Aug 13 00:56:46.400042 instance-setup[1483]: INFO Set channels for eth0 to 2. Aug 13 00:56:46.404216 instance-setup[1483]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Aug 13 00:56:46.405811 instance-setup[1483]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Aug 13 00:56:46.406279 instance-setup[1483]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Aug 13 00:56:46.407992 instance-setup[1483]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Aug 13 00:56:46.408353 instance-setup[1483]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Aug 13 00:56:46.410268 instance-setup[1483]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Aug 13 00:56:46.410758 instance-setup[1483]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Aug 13 00:56:46.412838 instance-setup[1483]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Aug 13 00:56:46.427846 instance-setup[1483]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Aug 13 00:56:46.428275 instance-setup[1483]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Aug 13 00:56:46.482356 systemd-nspawn[1472]: + /usr/bin/google_metadata_script_runner --script-type startup Aug 13 00:56:46.501383 sshd[1493]: Accepted publickey for core from 139.178.68.195 port 54348 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:56:46.503238 sshd[1493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:46.513919 systemd[1]: Started session-3.scope. Aug 13 00:56:46.515876 systemd-logind[1323]: New session 3 of user core. Aug 13 00:56:46.709926 sshd[1493]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:46.718404 systemd-logind[1323]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:56:46.721285 systemd[1]: sshd@2-10.128.0.76:22-139.178.68.195:54348.service: Deactivated successfully. Aug 13 00:56:46.722550 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:56:46.725079 systemd-logind[1323]: Removed session 3. Aug 13 00:56:46.756398 systemd[1]: Started sshd@3-10.128.0.76:22-139.178.68.195:54350.service. Aug 13 00:56:46.900137 startup-script[1523]: INFO Starting startup scripts. Aug 13 00:56:46.913844 startup-script[1523]: INFO No startup scripts found in metadata. Aug 13 00:56:46.914105 startup-script[1523]: INFO Finished running startup scripts. Aug 13 00:56:46.954782 systemd-nspawn[1472]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Aug 13 00:56:46.954927 systemd-nspawn[1472]: + daemon_pids=() Aug 13 00:56:46.955002 systemd-nspawn[1472]: + for d in accounts clock_skew network Aug 13 00:56:46.955362 systemd-nspawn[1472]: + daemon_pids+=($!) Aug 13 00:56:46.955482 systemd-nspawn[1472]: + for d in accounts clock_skew network Aug 13 00:56:46.955804 systemd-nspawn[1472]: + daemon_pids+=($!) 
Aug 13 00:56:46.955907 systemd-nspawn[1472]: + for d in accounts clock_skew network Aug 13 00:56:46.956228 systemd-nspawn[1472]: + daemon_pids+=($!) Aug 13 00:56:46.956403 systemd-nspawn[1472]: + NOTIFY_SOCKET=/run/systemd/notify Aug 13 00:56:46.956486 systemd-nspawn[1472]: + /usr/bin/systemd-notify --ready Aug 13 00:56:46.957069 systemd-nspawn[1472]: + /usr/bin/google_network_daemon Aug 13 00:56:46.957725 systemd-nspawn[1472]: + /usr/bin/google_clock_skew_daemon Aug 13 00:56:46.964671 systemd-nspawn[1472]: + /usr/bin/google_accounts_daemon Aug 13 00:56:47.032905 systemd-nspawn[1472]: + wait -n 36 37 38 Aug 13 00:56:47.055629 sshd[1529]: Accepted publickey for core from 139.178.68.195 port 54350 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:56:47.056993 sshd[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:47.066212 systemd[1]: Started session-4.scope. Aug 13 00:56:47.067685 systemd-logind[1323]: New session 4 of user core. Aug 13 00:56:47.278889 sshd[1529]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:47.284058 systemd[1]: sshd@3-10.128.0.76:22-139.178.68.195:54350.service: Deactivated successfully. Aug 13 00:56:47.286139 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:56:47.286153 systemd-logind[1323]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:56:47.288242 systemd-logind[1323]: Removed session 4. Aug 13 00:56:47.323847 systemd[1]: Started sshd@4-10.128.0.76:22-139.178.68.195:54358.service. Aug 13 00:56:47.633223 sshd[1542]: Accepted publickey for core from 139.178.68.195 port 54358 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:56:47.634334 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:47.642733 systemd-logind[1323]: New session 5 of user core. Aug 13 00:56:47.643539 systemd[1]: Started session-5.scope. Aug 13 00:56:47.731753 google-clock-skew[1534]: INFO Starting Google Clock Skew daemon. Aug 13 00:56:47.761636 google-clock-skew[1534]: INFO Clock drift token has changed: 0. Aug 13 00:56:47.773284 systemd-nspawn[1472]: hwclock: Cannot access the Hardware Clock via any known method. Aug 13 00:56:47.773962 systemd-nspawn[1472]: hwclock: Use the --verbose option to see the details of our search for an access method. Aug 13 00:56:47.775138 google-clock-skew[1534]: WARNING Failed to sync system time with hardware clock. Aug 13 00:56:47.798001 google-networking[1535]: INFO Starting Google Networking daemon. Aug 13 00:56:47.846708 sudo[1553]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:56:47.847187 sudo[1553]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:56:47.858618 sudo[1553]: pam_unix(sudo:session): session closed for user root Aug 13 00:56:47.859523 dbus-daemon[1303]: \xd0=\xd9/BV: received setenforce notice (enforcing=101089744) Aug 13 00:56:47.877426 groupadd[1556]: group added to /etc/group: name=google-sudoers, GID=1000 Aug 13 00:56:47.882355 groupadd[1556]: group added to /etc/gshadow: name=google-sudoers Aug 13 00:56:47.888534 groupadd[1556]: new group: name=google-sudoers, GID=1000 Aug 13 00:56:47.908105 sshd[1542]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:47.912508 google-accounts[1533]: INFO Starting Google Accounts daemon. Aug 13 00:56:47.913760 systemd[1]: sshd@4-10.128.0.76:22-139.178.68.195:54358.service: Deactivated successfully. 
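The oem-gce container trace above exports NOTIFY_SOCKET=/run/systemd/notify and then runs systemd-notify --ready once its three daemons are forked. The same readiness handshake can be sent directly; a sketch, meaningful only when systemd launched the process as a Type=notify service:

# Send the sd_notify readiness message that `systemd-notify --ready` sends:
# a "READY=1" datagram to the socket named in $NOTIFY_SOCKET.
import os
import socket

def notify_ready():
    target = os.environ.get("NOTIFY_SOCKET")
    if not target:
        return False                 # not running under a notify-aware manager
    if target.startswith("@"):       # abstract-namespace socket
        target = "\0" + target[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(b"READY=1", target)
    return True

if __name__ == "__main__":
    print(notify_ready())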
Aug 13 00:56:47.915147 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:56:47.917553 systemd-logind[1323]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:56:47.922954 systemd-logind[1323]: Removed session 5. Aug 13 00:56:47.944812 google-accounts[1533]: WARNING OS Login not installed. Aug 13 00:56:47.946147 google-accounts[1533]: INFO Creating a new user account for 0. Aug 13 00:56:47.950534 systemd[1]: Started sshd@5-10.128.0.76:22-139.178.68.195:54374.service. Aug 13 00:56:47.957348 systemd-nspawn[1472]: useradd: invalid user name '0': use --badname to ignore Aug 13 00:56:47.958548 google-accounts[1533]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Aug 13 00:56:47.964793 google-accounts[1533]: ERROR Exception calling the response handler. [Errno 13] Permission denied: '/var/lib/google'. Traceback (most recent call last): File "/usr/lib/python3.9/site-packages/google_compute_engine/metadata_watcher.py", line 200, in WatchMetadata handler(response) File "/usr/lib/python3.9/site-packages/google_compute_engine/accounts/accounts_daemon.py", line 285, in HandleAccounts self.utils.SetConfiguredUsers(desired_users.keys()) File "/usr/lib/python3.9/site-packages/google_compute_engine/accounts/accounts_utils.py", line 324, in SetConfiguredUsers os.makedirs(self.google_users_dir) File "/usr/lib/python-exec/python3.9/../../../lib/python3.9/os.py", line 225, in makedirs mkdir(name, mode) PermissionError: [Errno 13] Permission denied: '/var/lib/google' Aug 13 00:56:48.243342 sshd[1567]: Accepted publickey for core from 139.178.68.195 port 54374 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:56:48.244853 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:48.251824 systemd[1]: Started session-6.scope. Aug 13 00:56:48.252176 systemd-logind[1323]: New session 6 of user core. Aug 13 00:56:48.422694 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:56:48.423141 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:56:48.428541 sudo[1572]: pam_unix(sudo:session): session closed for user root Aug 13 00:56:48.443754 sudo[1571]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:56:48.444204 sudo[1571]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:56:48.459630 systemd[1]: Stopping audit-rules.service... Aug 13 00:56:48.460000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 00:56:48.467894 kernel: kauditd_printk_skb: 134 callbacks suppressed Aug 13 00:56:48.468026 kernel: audit: type=1305 audit(1755046608.460:141): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 00:56:48.468076 auditctl[1575]: No rules Aug 13 00:56:48.469309 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:56:48.469746 systemd[1]: Stopped audit-rules.service. Aug 13 00:56:48.473904 systemd[1]: Starting audit-rules.service... 
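The google-accounts traceback above fails in os.makedirs('/var/lib/google') with EACCES because that path is not writable from inside the container. A small sketch of the same call with the permission failure handled rather than propagated; this illustrates the failure mode and is not a patch for the guest agent:

# Attempt the directory creation that fails in the traceback above,
# degrading gracefully instead of raising PermissionError.
import os

def ensure_dir(path="/var/lib/google"):
    try:
        os.makedirs(path, exist_ok=True)
        return True
    except PermissionError:
        return False   # caller can skip the feature instead of crashing

if __name__ == "__main__":
    print(ensure_dir())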
Aug 13 00:56:48.460000 audit[1575]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff1b9a1540 a2=420 a3=0 items=0 ppid=1 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:48.460000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Aug 13 00:56:48.529847 kernel: audit: type=1300 audit(1755046608.460:141): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff1b9a1540 a2=420 a3=0 items=0 ppid=1 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:48.529994 kernel: audit: type=1327 audit(1755046608.460:141): proctitle=2F7362696E2F617564697463746C002D44 Aug 13 00:56:48.530052 augenrules[1593]: No rules Aug 13 00:56:48.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.539158 systemd[1]: Finished audit-rules.service. Aug 13 00:56:48.541139 sudo[1571]: pam_unix(sudo:session): session closed for user root Aug 13 00:56:48.552667 kernel: audit: type=1131 audit(1755046608.466:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.552830 kernel: audit: type=1130 audit(1755046608.537:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.539000 audit[1571]: USER_END pid=1571 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.586917 sshd[1567]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:48.594353 systemd-logind[1323]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:56:48.596945 systemd[1]: sshd@5-10.128.0.76:22-139.178.68.195:54374.service: Deactivated successfully. Aug 13 00:56:48.598246 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:56:48.601202 kernel: audit: type=1106 audit(1755046608.539:144): pid=1571 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.600705 systemd-logind[1323]: Removed session 6. Aug 13 00:56:48.539000 audit[1571]: CRED_DISP pid=1571 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:48.624871 kernel: audit: type=1104 audit(1755046608.539:145): pid=1571 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.625029 kernel: audit: type=1106 audit(1755046608.587:146): pid=1567 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:48.587000 audit[1567]: USER_END pid=1567 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:48.587000 audit[1567]: CRED_DISP pid=1567 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:48.665785 systemd[1]: Started sshd@6-10.128.0.76:22-139.178.68.195:54378.service. Aug 13 00:56:48.682671 kernel: audit: type=1104 audit(1755046608.587:147): pid=1567 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:48.682802 kernel: audit: type=1131 audit(1755046608.595:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.76:22-139.178.68.195:54374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.76:22-139.178.68.195:54374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.76:22-139.178.68.195:54378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:48.958000 audit[1600]: USER_ACCT pid=1600 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:48.960223 sshd[1600]: Accepted publickey for core from 139.178.68.195 port 54378 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:56:48.960000 audit[1600]: CRED_ACQ pid=1600 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:48.960000 audit[1600]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0224ce90 a2=3 a3=0 items=0 ppid=1 pid=1600 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:48.960000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:48.962290 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:48.970283 systemd[1]: Started session-7.scope. Aug 13 00:56:48.970921 systemd-logind[1323]: New session 7 of user core. Aug 13 00:56:48.978000 audit[1600]: USER_START pid=1600 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:48.981000 audit[1603]: CRED_ACQ pid=1603 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:49.136000 audit[1604]: USER_ACCT pid=1604 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:56:49.138754 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:56:49.137000 audit[1604]: CRED_REFR pid=1604 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:56:49.139205 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:56:49.140000 audit[1604]: USER_START pid=1604 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:56:49.176102 systemd[1]: Starting docker.service... 
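The audit records in this log carry the audited command line in the proctitle= field as hex-encoded, NUL-separated argv; the value logged for the auditctl event above decodes to "/sbin/auditctl -D" (delete all rules), matching the audit-rules stop. A decoder sketch:

# Decode the hex proctitle= values from the audit records above into argv.
def decode_proctitle(hex_value):
    return bytes.fromhex(hex_value).decode(errors="replace").split("\x00")

if __name__ == "__main__":
    # Value copied from the PROCTITLE record for the auditctl run above.
    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))  # ['/sbin/auditctl', '-D']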
Aug 13 00:56:49.231129 env[1614]: time="2025-08-13T00:56:49.230959495Z" level=info msg="Starting up" Aug 13 00:56:49.234213 env[1614]: time="2025-08-13T00:56:49.234179198Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:56:49.234371 env[1614]: time="2025-08-13T00:56:49.234346344Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:56:49.234491 env[1614]: time="2025-08-13T00:56:49.234469556Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:56:49.234573 env[1614]: time="2025-08-13T00:56:49.234555179Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:56:49.237187 env[1614]: time="2025-08-13T00:56:49.237123244Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:56:49.237187 env[1614]: time="2025-08-13T00:56:49.237153099Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:56:49.237187 env[1614]: time="2025-08-13T00:56:49.237179559Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:56:49.237187 env[1614]: time="2025-08-13T00:56:49.237195222Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:56:49.806902 env[1614]: time="2025-08-13T00:56:49.806832846Z" level=warning msg="Your kernel does not support cgroup blkio weight" Aug 13 00:56:49.806902 env[1614]: time="2025-08-13T00:56:49.806866310Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Aug 13 00:56:49.807285 env[1614]: time="2025-08-13T00:56:49.807256376Z" level=info msg="Loading containers: start." 
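dockerd above reaches containerd over /run/containerd/containerd.sock and serves its own API through the docker.socket unit started earlier, conventionally at /var/run/docker.sock. A standard-library sketch of querying the engine over that unix socket with raw HTTP, assuming the conventional socket path and no docker SDK:

# Issue a GET against the Docker Engine API over its unix socket.
import socket

def docker_get(path="/version", sock_path="/var/run/docker.sock"):
    request = "GET {} HTTP/1.0\r\nHost: docker\r\n\r\n".format(path).encode()
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(request)
        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    print(docker_get())   # HTTP headers plus the engine's JSON version body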
Aug 13 00:56:49.898000 audit[1644]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:49.898000 audit[1644]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc1db7bdb0 a2=0 a3=7ffc1db7bd9c items=0 ppid=1614 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.898000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Aug 13 00:56:49.901000 audit[1646]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1646 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:49.901000 audit[1646]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcb79a2690 a2=0 a3=7ffcb79a267c items=0 ppid=1614 pid=1646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.901000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Aug 13 00:56:49.904000 audit[1648]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:49.904000 audit[1648]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc914689f0 a2=0 a3=7ffc914689dc items=0 ppid=1614 pid=1648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.904000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 00:56:49.907000 audit[1650]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:49.907000 audit[1650]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffce48c5220 a2=0 a3=7ffce48c520c items=0 ppid=1614 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.907000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 00:56:49.911000 audit[1652]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:49.911000 audit[1652]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc87b653a0 a2=0 a3=7ffc87b6538c items=0 ppid=1614 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.911000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Aug 13 00:56:49.933000 audit[1657]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1657 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Aug 13 00:56:49.933000 audit[1657]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc185be0f0 a2=0 a3=7ffc185be0dc items=0 ppid=1614 pid=1657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.933000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Aug 13 00:56:49.946000 audit[1659]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:49.946000 audit[1659]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc3cf25f0 a2=0 a3=7ffdc3cf25dc items=0 ppid=1614 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.946000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Aug 13 00:56:49.950000 audit[1661]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:49.950000 audit[1661]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff31f99430 a2=0 a3=7fff31f9941c items=0 ppid=1614 pid=1661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.950000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Aug 13 00:56:49.953000 audit[1663]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:49.953000 audit[1663]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd601e3870 a2=0 a3=7ffd601e385c items=0 ppid=1614 pid=1663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.953000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:56:49.967000 audit[1667]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1667 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:49.967000 audit[1667]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe4e547c20 a2=0 a3=7ffe4e547c0c items=0 ppid=1614 pid=1667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.967000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:56:49.974000 audit[1668]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:49.974000 audit[1668]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc88033470 a2=0 a3=7ffc8803345c items=0 ppid=1614 
pid=1668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.974000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:56:49.993624 kernel: Initializing XFRM netlink socket Aug 13 00:56:50.041240 env[1614]: time="2025-08-13T00:56:50.041174657Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 00:56:50.074000 audit[1677]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.074000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd83982cd0 a2=0 a3=7ffd83982cbc items=0 ppid=1614 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.074000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Aug 13 00:56:50.090000 audit[1680]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.090000 audit[1680]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe897982a0 a2=0 a3=7ffe8979828c items=0 ppid=1614 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.090000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Aug 13 00:56:50.095000 audit[1683]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.095000 audit[1683]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffff8509ee0 a2=0 a3=7ffff8509ecc items=0 ppid=1614 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.095000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Aug 13 00:56:50.098000 audit[1685]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.098000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd61a33830 a2=0 a3=7ffd61a3381c items=0 ppid=1614 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.098000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Aug 13 00:56:50.101000 audit[1687]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.101000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff166aaba0 a2=0 a3=7fff166aab8c items=0 ppid=1614 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.101000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Aug 13 00:56:50.105000 audit[1689]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1689 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.105000 audit[1689]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff890f06a0 a2=0 a3=7fff890f068c items=0 ppid=1614 pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.105000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Aug 13 00:56:50.108000 audit[1691]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1691 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.108000 audit[1691]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff9d9dbc20 a2=0 a3=7fff9d9dbc0c items=0 ppid=1614 pid=1691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.108000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Aug 13 00:56:50.122000 audit[1694]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.122000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff231339b0 a2=0 a3=7fff2313399c items=0 ppid=1614 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.122000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Aug 13 00:56:50.127000 audit[1696]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.127000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffce2301f40 a2=0 a3=7ffce2301f2c items=0 ppid=1614 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.127000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 00:56:50.130000 audit[1698]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1698 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.130000 audit[1698]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe9554b280 a2=0 a3=7ffe9554b26c items=0 ppid=1614 pid=1698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.130000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 00:56:50.133000 audit[1700]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.133000 audit[1700]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffff5590900 a2=0 a3=7ffff55908ec items=0 ppid=1614 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.133000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Aug 13 00:56:50.136374 systemd-networkd[1087]: docker0: Link UP Aug 13 00:56:50.150000 audit[1704]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.150000 audit[1704]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeeb32d000 a2=0 a3=7ffeeb32cfec items=0 ppid=1614 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.150000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:56:50.153000 audit[1705]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:56:50.153000 audit[1705]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcb0d12f30 a2=0 a3=7ffcb0d12f1c items=0 ppid=1614 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.153000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:56:50.156268 env[1614]: time="2025-08-13T00:56:50.156205110Z" level=info msg="Loading containers: done." Aug 13 00:56:50.176687 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3761750488-merged.mount: Deactivated successfully. 
Aug 13 00:56:50.186716 env[1614]: time="2025-08-13T00:56:50.186652176Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:56:50.187011 env[1614]: time="2025-08-13T00:56:50.186976262Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 00:56:50.187171 env[1614]: time="2025-08-13T00:56:50.187141842Z" level=info msg="Daemon has completed initialization" Aug 13 00:56:50.211494 systemd[1]: Started docker.service. Aug 13 00:56:50.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:50.224373 env[1614]: time="2025-08-13T00:56:50.224290893Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:56:50.876498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:56:50.876888 systemd[1]: Stopped kubelet.service. Aug 13 00:56:50.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:50.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:50.879567 systemd[1]: Starting kubelet.service... Aug 13 00:56:51.174560 systemd[1]: Started kubelet.service. Aug 13 00:56:51.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:51.242451 kubelet[1745]: E0813 00:56:51.242400 1745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:56:51.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:56:51.246665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:56:51.246983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:56:51.525927 env[1335]: time="2025-08-13T00:56:51.525757621Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:56:52.079915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1022284040.mount: Deactivated successfully. 
Aug 13 00:56:53.902058 env[1335]: time="2025-08-13T00:56:53.901972381Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:53.905646 env[1335]: time="2025-08-13T00:56:53.905543911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:53.908541 env[1335]: time="2025-08-13T00:56:53.908478673Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:53.911880 env[1335]: time="2025-08-13T00:56:53.911822757Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:53.912962 env[1335]: time="2025-08-13T00:56:53.912897173Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 00:56:53.914007 env[1335]: time="2025-08-13T00:56:53.913966048Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:56:55.695335 env[1335]: time="2025-08-13T00:56:55.695208541Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:55.698831 env[1335]: time="2025-08-13T00:56:55.698765718Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:55.702367 env[1335]: time="2025-08-13T00:56:55.702305062Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:55.705853 env[1335]: time="2025-08-13T00:56:55.705791979Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:55.707299 env[1335]: time="2025-08-13T00:56:55.707185781Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 00:56:55.708372 env[1335]: time="2025-08-13T00:56:55.708332640Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:56:57.170047 env[1335]: time="2025-08-13T00:56:57.169953842Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:57.173412 env[1335]: time="2025-08-13T00:56:57.173351187Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:57.176736 env[1335]: 
time="2025-08-13T00:56:57.176683432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:57.179130 env[1335]: time="2025-08-13T00:56:57.179076001Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:57.180199 env[1335]: time="2025-08-13T00:56:57.180124489Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 00:56:57.181688 env[1335]: time="2025-08-13T00:56:57.181634932Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:56:58.321300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251238429.mount: Deactivated successfully. Aug 13 00:56:59.127538 env[1335]: time="2025-08-13T00:56:59.127429630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:59.131352 env[1335]: time="2025-08-13T00:56:59.131275236Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:59.134034 env[1335]: time="2025-08-13T00:56:59.133971391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:59.136509 env[1335]: time="2025-08-13T00:56:59.136449524Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:56:59.137452 env[1335]: time="2025-08-13T00:56:59.137370644Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 00:56:59.138312 env[1335]: time="2025-08-13T00:56:59.138240424Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:56:59.588761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422805484.mount: Deactivated successfully. 
Aug 13 00:57:00.933627 env[1335]: time="2025-08-13T00:57:00.933525920Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:00.937453 env[1335]: time="2025-08-13T00:57:00.937380354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:00.940941 env[1335]: time="2025-08-13T00:57:00.940869493Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:00.944580 env[1335]: time="2025-08-13T00:57:00.944508142Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:00.946213 env[1335]: time="2025-08-13T00:57:00.946136606Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:57:00.947158 env[1335]: time="2025-08-13T00:57:00.947089407Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:57:01.376995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:57:01.405634 kernel: kauditd_printk_skb: 88 callbacks suppressed Aug 13 00:57:01.405750 kernel: audit: type=1130 audit(1755046621.375:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:01.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:01.377342 systemd[1]: Stopped kubelet.service. Aug 13 00:57:01.380365 systemd[1]: Starting kubelet.service... Aug 13 00:57:01.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:01.430925 kernel: audit: type=1131 audit(1755046621.375:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:01.640464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019439283.mount: Deactivated successfully. Aug 13 00:57:01.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:01.679262 systemd[1]: Started kubelet.service. Aug 13 00:57:01.703628 kernel: audit: type=1130 audit(1755046621.678:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:57:01.720686 env[1335]: time="2025-08-13T00:57:01.719382583Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:01.726652 env[1335]: time="2025-08-13T00:57:01.724653030Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:01.729557 env[1335]: time="2025-08-13T00:57:01.729491152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:01.733634 env[1335]: time="2025-08-13T00:57:01.733549468Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:01.735150 env[1335]: time="2025-08-13T00:57:01.735089160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:57:01.737424 env[1335]: time="2025-08-13T00:57:01.737380020Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:57:01.778318 kubelet[1761]: E0813 00:57:01.778244 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:57:01.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:57:01.782822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:57:01.783181 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:57:01.809655 kernel: audit: type=1131 audit(1755046621.782:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:57:02.219768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116555755.mount: Deactivated successfully. 
Aug 13 00:57:04.910870 env[1335]: time="2025-08-13T00:57:04.910780532Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:04.914296 env[1335]: time="2025-08-13T00:57:04.914186573Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:04.917111 env[1335]: time="2025-08-13T00:57:04.917060290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:04.925639 env[1335]: time="2025-08-13T00:57:04.925555845Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:04.926911 env[1335]: time="2025-08-13T00:57:04.926845320Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:57:07.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:07.701509 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 00:57:07.726620 kernel: audit: type=1131 audit(1755046627.700:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:09.624157 systemd[1]: Stopped kubelet.service. Aug 13 00:57:09.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:09.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:09.648684 systemd[1]: Starting kubelet.service... Aug 13 00:57:09.668997 kernel: audit: type=1130 audit(1755046629.623:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:09.669215 kernel: audit: type=1131 audit(1755046629.624:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:09.714921 systemd[1]: Reloading. 
Aug 13 00:57:09.856059 /usr/lib/systemd/system-generators/torcx-generator[1817]: time="2025-08-13T00:57:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:57:09.856120 /usr/lib/systemd/system-generators/torcx-generator[1817]: time="2025-08-13T00:57:09Z" level=info msg="torcx already run" Aug 13 00:57:10.059572 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:57:10.059618 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:57:10.087437 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:57:10.215930 systemd[1]: Started kubelet.service. Aug 13 00:57:10.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:10.238619 kernel: audit: type=1130 audit(1755046630.214:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:10.239839 systemd[1]: Stopping kubelet.service... Aug 13 00:57:10.242121 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:57:10.242568 systemd[1]: Stopped kubelet.service. Aug 13 00:57:10.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:10.245741 systemd[1]: Starting kubelet.service... Aug 13 00:57:10.265634 kernel: audit: type=1131 audit(1755046630.241:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:10.500931 systemd[1]: Started kubelet.service. Aug 13 00:57:10.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:10.526637 kernel: audit: type=1130 audit(1755046630.500:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:10.581644 kubelet[1888]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:57:10.582120 kubelet[1888]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 13 00:57:10.582187 kubelet[1888]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:57:10.582385 kubelet[1888]: I0813 00:57:10.582343 1888 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:57:10.899451 kubelet[1888]: I0813 00:57:10.899386 1888 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:57:10.899451 kubelet[1888]: I0813 00:57:10.899426 1888 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:57:10.899856 kubelet[1888]: I0813 00:57:10.899817 1888 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:57:10.947504 kubelet[1888]: E0813 00:57:10.947453 1888 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:57:10.951817 kubelet[1888]: I0813 00:57:10.951761 1888 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:57:10.973248 kubelet[1888]: E0813 00:57:10.973107 1888 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:57:10.973248 kubelet[1888]: I0813 00:57:10.973221 1888 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:57:10.979558 kubelet[1888]: I0813 00:57:10.979516 1888 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:57:10.980068 kubelet[1888]: I0813 00:57:10.980034 1888 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:57:10.980375 kubelet[1888]: I0813 00:57:10.980292 1888 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:57:10.980661 kubelet[1888]: I0813 00:57:10.980352 1888 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:57:10.980867 kubelet[1888]: I0813 00:57:10.980680 1888 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:57:10.980867 kubelet[1888]: I0813 00:57:10.980698 1888 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:57:10.980867 kubelet[1888]: I0813 00:57:10.980859 1888 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:57:10.993630 kubelet[1888]: I0813 00:57:10.993543 1888 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:57:10.993630 kubelet[1888]: I0813 00:57:10.993648 1888 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:57:10.993953 kubelet[1888]: I0813 00:57:10.993711 1888 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:57:10.993953 kubelet[1888]: I0813 00:57:10.993746 1888 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:57:11.009279 kubelet[1888]: W0813 00:57:11.009186 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Aug 13 00:57:11.009659 kubelet[1888]: E0813 00:57:11.009624 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:57:11.009927 kubelet[1888]: I0813 00:57:11.009907 1888 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:57:11.010744 kubelet[1888]: I0813 00:57:11.010708 1888 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:57:11.010965 kubelet[1888]: W0813 00:57:11.010946 1888 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:57:11.025058 kubelet[1888]: I0813 00:57:11.024993 1888 server.go:1274] "Started kubelet" Aug 13 00:57:11.045821 kubelet[1888]: W0813 00:57:11.045744 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Aug 13 00:57:11.046139 kubelet[1888]: E0813 00:57:11.046073 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:57:11.046348 kubelet[1888]: I0813 00:57:11.046150 1888 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:57:11.047305 kubelet[1888]: I0813 00:57:11.047253 1888 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:57:11.047939 kubelet[1888]: I0813 00:57:11.047913 1888 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:57:11.048000 audit[1888]: AVC avc: denied { mac_admin } for pid=1888 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:57:11.063005 kubelet[1888]: I0813 00:57:11.049572 1888 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 00:57:11.063005 kubelet[1888]: I0813 00:57:11.049666 1888 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 00:57:11.063005 kubelet[1888]: I0813 00:57:11.049776 1888 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:57:11.063005 kubelet[1888]: I0813 00:57:11.059831 1888 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:57:11.066447 kubelet[1888]: I0813 00:57:11.066376 1888 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:57:11.070454 kubelet[1888]: I0813 00:57:11.070403 1888 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:57:11.071190 kubelet[1888]: E0813 00:57:11.071151 1888 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" not found" Aug 13 00:57:11.048000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:57:11.076086 kubelet[1888]: E0813 00:57:11.073457 1888 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.76:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal.185b2d9cbe7ce31e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,UID:ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,},FirstTimestamp:2025-08-13 00:57:11.024943902 +0000 UTC m=+0.506298299,LastTimestamp:2025-08-13 00:57:11.024943902 +0000 UTC m=+0.506298299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,}" Aug 13 00:57:11.078803 kubelet[1888]: E0813 00:57:11.078753 1888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.76:6443: connect: connection refused" interval="200ms" Aug 13 00:57:11.079718 kubelet[1888]: I0813 00:57:11.079685 1888 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:57:11.080021 kubelet[1888]: I0813 00:57:11.079995 1888 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:57:11.083140 kernel: audit: type=1400 audit(1755046631.048:197): avc: denied { mac_admin } for pid=1888 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:57:11.083271 kernel: audit: type=1401 audit(1755046631.048:197): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:57:11.048000 audit[1888]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00093c720 a1=c0008b6e70 a2=c00093c6f0 a3=25 items=0 ppid=1 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.095893 kubelet[1888]: I0813 00:57:11.084278 1888 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:57:11.095893 kubelet[1888]: I0813 00:57:11.086940 1888 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:57:11.095893 kubelet[1888]: I0813 00:57:11.087013 1888 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:57:11.106196 kubelet[1888]: W0813 00:57:11.106076 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: 
connect: connection refused Aug 13 00:57:11.106511 kubelet[1888]: E0813 00:57:11.106461 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:57:11.115027 kernel: audit: type=1300 audit(1755046631.048:197): arch=c000003e syscall=188 success=no exit=-22 a0=c00093c720 a1=c0008b6e70 a2=c00093c6f0 a3=25 items=0 ppid=1 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:57:11.144226 kernel: audit: type=1327 audit(1755046631.048:197): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:57:11.049000 audit[1888]: AVC avc: denied { mac_admin } for pid=1888 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:57:11.049000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:57:11.049000 audit[1888]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008ad680 a1=c0008b6e88 a2=c00093c7b0 a3=25 items=0 ppid=1 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.049000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:57:11.054000 audit[1900]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:11.054000 audit[1900]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd27b15380 a2=0 a3=7ffd27b1536c items=0 ppid=1888 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.054000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 00:57:11.056000 audit[1901]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:11.056000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc20af9ed0 a2=0 a3=7ffc20af9ebc items=0 ppid=1888 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.056000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 00:57:11.093000 audit[1903]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:11.093000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffca849e440 a2=0 a3=7ffca849e42c items=0 ppid=1888 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.093000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:57:11.139000 audit[1907]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:11.139000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc551c9340 a2=0 a3=7ffc551c932c items=0 ppid=1888 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.139000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:57:11.160493 kubelet[1888]: I0813 00:57:11.159903 1888 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:57:11.160493 kubelet[1888]: I0813 00:57:11.159932 1888 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:57:11.160493 kubelet[1888]: I0813 00:57:11.159959 1888 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:57:11.161000 audit[1910]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:11.161000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffefeaf14a0 a2=0 a3=7ffefeaf148c items=0 ppid=1888 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Aug 13 00:57:11.163729 kubelet[1888]: I0813 00:57:11.162517 1888 policy_none.go:49] "None policy: Start" Aug 13 00:57:11.163809 kubelet[1888]: I0813 00:57:11.163798 1888 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:57:11.163868 kubelet[1888]: I0813 00:57:11.163826 1888 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:57:11.164211 kubelet[1888]: I0813 00:57:11.164172 1888 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Aug 13 00:57:11.165000 audit[1913]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:11.165000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffffe2e3710 a2=0 a3=7ffffe2e36fc items=0 ppid=1888 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 00:57:11.166433 kubelet[1888]: I0813 00:57:11.166346 1888 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:57:11.166433 kubelet[1888]: I0813 00:57:11.166373 1888 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:57:11.166433 kubelet[1888]: I0813 00:57:11.166404 1888 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:57:11.166621 kubelet[1888]: E0813 00:57:11.166467 1888 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:57:11.168000 audit[1914]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:11.168000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0e8b2d10 a2=0 a3=7ffc0e8b2cfc items=0 ppid=1888 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.168000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 00:57:11.171869 kubelet[1888]: W0813 00:57:11.171832 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Aug 13 00:57:11.171999 kubelet[1888]: E0813 00:57:11.171890 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:57:11.171999 kubelet[1888]: E0813 00:57:11.171842 1888 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" not found" Aug 13 00:57:11.172000 audit[1916]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:11.172000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7ae2c830 a2=0 a3=7ffc7ae2c81c items=0 ppid=1888 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.172000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:57:11.176000 audit[1917]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:11.176000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff362fcd50 a2=0 a3=7fff362fcd3c items=0 ppid=1888 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:57:11.179988 kubelet[1888]: I0813 00:57:11.179953 1888 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:57:11.179000 audit[1888]: AVC avc: denied { mac_admin } for pid=1888 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:57:11.179000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:57:11.179000 audit[1888]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b44b10 a1=c000e5dce0 a2=c000b44ae0 a3=25 items=0 ppid=1 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.179000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:57:11.180466 kubelet[1888]: I0813 00:57:11.180040 1888 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:57:11.180466 kubelet[1888]: I0813 00:57:11.180202 1888 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:57:11.180466 kubelet[1888]: I0813 00:57:11.180220 1888 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:57:11.181000 audit[1915]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:11.181000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe31da2500 a2=0 a3=7ffe31da24ec items=0 ppid=1888 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.181000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 00:57:11.183584 kubelet[1888]: I0813 00:57:11.183545 1888 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:57:11.184000 audit[1919]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:11.184000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff579f37e0 a2=0 a3=7fff579f37cc items=0 ppid=1888 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:57:11.188553 kubelet[1888]: E0813 00:57:11.188523 1888 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" not found" Aug 13 00:57:11.188000 audit[1920]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:11.188000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc5f8364e0 a2=0 a3=7ffc5f8364cc items=0 ppid=1888 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:11.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:57:11.285489 kubelet[1888]: E0813 00:57:11.285425 1888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.76:6443: connect: connection refused" interval="400ms" Aug 13 00:57:11.286820 kubelet[1888]: I0813 00:57:11.286765 1888 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.287794 kubelet[1888]: E0813 00:57:11.287741 1888 kubelet_node_status.go:95] "Unable to register node 
with API server" err="Post \"https://10.128.0.76:6443/api/v1/nodes\": dial tcp 10.128.0.76:6443: connect: connection refused" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.387872 kubelet[1888]: I0813 00:57:11.387762 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/885bbddcd25466a07c09e203789be89a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"885bbddcd25466a07c09e203789be89a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.387872 kubelet[1888]: I0813 00:57:11.387852 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5a425f2c256eb927c4d20610257a015-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"f5a425f2c256eb927c4d20610257a015\") " pod="kube-system/kube-scheduler-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.388215 kubelet[1888]: I0813 00:57:11.387911 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/885bbddcd25466a07c09e203789be89a-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"885bbddcd25466a07c09e203789be89a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.388215 kubelet[1888]: I0813 00:57:11.387937 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/885bbddcd25466a07c09e203789be89a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"885bbddcd25466a07c09e203789be89a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.388215 kubelet[1888]: I0813 00:57:11.387986 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/885bbddcd25466a07c09e203789be89a-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"885bbddcd25466a07c09e203789be89a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.388215 kubelet[1888]: I0813 00:57:11.388052 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/885bbddcd25466a07c09e203789be89a-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"885bbddcd25466a07c09e203789be89a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.489321 kubelet[1888]: I0813 00:57:11.489144 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91e9d003c04468891d7c852de23f7b78-ca-certs\") pod \"kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: 
\"91e9d003c04468891d7c852de23f7b78\") " pod="kube-system/kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.489321 kubelet[1888]: I0813 00:57:11.489220 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91e9d003c04468891d7c852de23f7b78-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"91e9d003c04468891d7c852de23f7b78\") " pod="kube-system/kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.492083 kubelet[1888]: I0813 00:57:11.492040 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91e9d003c04468891d7c852de23f7b78-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"91e9d003c04468891d7c852de23f7b78\") " pod="kube-system/kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.494745 kubelet[1888]: I0813 00:57:11.494693 1888 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.495169 kubelet[1888]: E0813 00:57:11.495102 1888 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.76:6443/api/v1/nodes\": dial tcp 10.128.0.76:6443: connect: connection refused" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.590684 env[1335]: time="2025-08-13T00:57:11.590582257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,Uid:885bbddcd25466a07c09e203789be89a,Namespace:kube-system,Attempt:0,}" Aug 13 00:57:11.600327 env[1335]: time="2025-08-13T00:57:11.600252372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,Uid:f5a425f2c256eb927c4d20610257a015,Namespace:kube-system,Attempt:0,}" Aug 13 00:57:11.605000 env[1335]: time="2025-08-13T00:57:11.604943628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,Uid:91e9d003c04468891d7c852de23f7b78,Namespace:kube-system,Attempt:0,}" Aug 13 00:57:11.686644 kubelet[1888]: E0813 00:57:11.686549 1888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.76:6443: connect: connection refused" interval="800ms" Aug 13 00:57:11.867896 kubelet[1888]: W0813 00:57:11.867705 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Aug 13 00:57:11.867896 kubelet[1888]: E0813 00:57:11.867803 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:57:11.901089 kubelet[1888]: I0813 00:57:11.901039 1888 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.901572 kubelet[1888]: E0813 00:57:11.901516 1888 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.76:6443/api/v1/nodes\": dial tcp 10.128.0.76:6443: connect: connection refused" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:11.933278 kubelet[1888]: W0813 00:57:11.933211 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Aug 13 00:57:11.933486 kubelet[1888]: E0813 00:57:11.933296 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:57:12.233150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586605688.mount: Deactivated successfully. Aug 13 00:57:12.246385 env[1335]: time="2025-08-13T00:57:12.246264176Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.248195 env[1335]: time="2025-08-13T00:57:12.248133232Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.253523 env[1335]: time="2025-08-13T00:57:12.253460956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.255418 env[1335]: time="2025-08-13T00:57:12.255352940Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.256772 env[1335]: time="2025-08-13T00:57:12.256719896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.259922 env[1335]: time="2025-08-13T00:57:12.259861931Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.264044 env[1335]: time="2025-08-13T00:57:12.263973254Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.267539 env[1335]: time="2025-08-13T00:57:12.267386364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.270862 env[1335]: time="2025-08-13T00:57:12.270813426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.274290 env[1335]: time="2025-08-13T00:57:12.274228581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.276852 env[1335]: time="2025-08-13T00:57:12.276802801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.278197 env[1335]: time="2025-08-13T00:57:12.278141306Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:12.322904 env[1335]: time="2025-08-13T00:57:12.322007188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:57:12.322904 env[1335]: time="2025-08-13T00:57:12.322107474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:57:12.322904 env[1335]: time="2025-08-13T00:57:12.322132007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:57:12.322904 env[1335]: time="2025-08-13T00:57:12.322375148Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/836cd485280daf206e5201bdf6729ca0e0df5a71e7ec593eaf877971ce618acd pid=1928 runtime=io.containerd.runc.v2 Aug 13 00:57:12.347253 env[1335]: time="2025-08-13T00:57:12.347093080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:57:12.347561 env[1335]: time="2025-08-13T00:57:12.347494806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:57:12.347801 env[1335]: time="2025-08-13T00:57:12.347739583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:57:12.348823 env[1335]: time="2025-08-13T00:57:12.348736895Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d38a07c29f7362c89654c8316d9e60e1a7f295700b8813e397deca17346877af pid=1951 runtime=io.containerd.runc.v2 Aug 13 00:57:12.377634 env[1335]: time="2025-08-13T00:57:12.377376769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:57:12.377995 env[1335]: time="2025-08-13T00:57:12.377892534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:57:12.378233 env[1335]: time="2025-08-13T00:57:12.378182806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:57:12.378773 env[1335]: time="2025-08-13T00:57:12.378710639Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdedb53c5f7ce35e38e87969351eab39c5f5d29055ea15805aa95d38be1a5c74 pid=1976 runtime=io.containerd.runc.v2 Aug 13 00:57:12.488484 kubelet[1888]: W0813 00:57:12.488151 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Aug 13 00:57:12.488484 kubelet[1888]: E0813 00:57:12.488221 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:57:12.488484 kubelet[1888]: E0813 00:57:12.488324 1888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.76:6443: connect: connection refused" interval="1.6s" Aug 13 00:57:12.488484 kubelet[1888]: W0813 00:57:12.488366 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Aug 13 00:57:12.488484 kubelet[1888]: E0813 00:57:12.488454 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:57:12.508378 env[1335]: time="2025-08-13T00:57:12.508289554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,Uid:885bbddcd25466a07c09e203789be89a,Namespace:kube-system,Attempt:0,} returns sandbox id \"836cd485280daf206e5201bdf6729ca0e0df5a71e7ec593eaf877971ce618acd\"" Aug 13 00:57:12.512019 kubelet[1888]: E0813 00:57:12.511369 1888 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flat" Aug 13 00:57:12.514113 env[1335]: time="2025-08-13T00:57:12.514062713Z" level=info msg="CreateContainer within sandbox \"836cd485280daf206e5201bdf6729ca0e0df5a71e7ec593eaf877971ce618acd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:57:12.540783 env[1335]: time="2025-08-13T00:57:12.533718808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,Uid:f5a425f2c256eb927c4d20610257a015,Namespace:kube-system,Attempt:0,} returns sandbox id \"d38a07c29f7362c89654c8316d9e60e1a7f295700b8813e397deca17346877af\"" Aug 13 00:57:12.544461 kubelet[1888]: E0813 00:57:12.544408 1888 
kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-21291" Aug 13 00:57:12.547096 env[1335]: time="2025-08-13T00:57:12.546809964Z" level=info msg="CreateContainer within sandbox \"d38a07c29f7362c89654c8316d9e60e1a7f295700b8813e397deca17346877af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:57:12.554101 env[1335]: time="2025-08-13T00:57:12.554038127Z" level=info msg="CreateContainer within sandbox \"836cd485280daf206e5201bdf6729ca0e0df5a71e7ec593eaf877971ce618acd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7ec9c88b32a239ba5f5488c743db6833f646aa24b2ebe8e44a053a83c9475f54\"" Aug 13 00:57:12.557635 env[1335]: time="2025-08-13T00:57:12.556049478Z" level=info msg="StartContainer for \"7ec9c88b32a239ba5f5488c743db6833f646aa24b2ebe8e44a053a83c9475f54\"" Aug 13 00:57:12.563977 env[1335]: time="2025-08-13T00:57:12.563920967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,Uid:91e9d003c04468891d7c852de23f7b78,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdedb53c5f7ce35e38e87969351eab39c5f5d29055ea15805aa95d38be1a5c74\"" Aug 13 00:57:12.566142 kubelet[1888]: E0813 00:57:12.566069 1888 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-21291" Aug 13 00:57:12.567807 env[1335]: time="2025-08-13T00:57:12.567761459Z" level=info msg="CreateContainer within sandbox \"fdedb53c5f7ce35e38e87969351eab39c5f5d29055ea15805aa95d38be1a5c74\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:57:12.584360 env[1335]: time="2025-08-13T00:57:12.584297957Z" level=info msg="CreateContainer within sandbox \"d38a07c29f7362c89654c8316d9e60e1a7f295700b8813e397deca17346877af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2b11da90d78282e143f916d30c1eb844e38a31c0f82778c1b54e1e819774e15c\"" Aug 13 00:57:12.585505 env[1335]: time="2025-08-13T00:57:12.585462512Z" level=info msg="StartContainer for \"2b11da90d78282e143f916d30c1eb844e38a31c0f82778c1b54e1e819774e15c\"" Aug 13 00:57:12.593228 env[1335]: time="2025-08-13T00:57:12.593156091Z" level=info msg="CreateContainer within sandbox \"fdedb53c5f7ce35e38e87969351eab39c5f5d29055ea15805aa95d38be1a5c74\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b83f9d1d67a4fb71e6089936b132a9fb8c1cb985bcc867851240fdff7ce37dca\"" Aug 13 00:57:12.594980 env[1335]: time="2025-08-13T00:57:12.594933540Z" level=info msg="StartContainer for \"b83f9d1d67a4fb71e6089936b132a9fb8c1cb985bcc867851240fdff7ce37dca\"" Aug 13 00:57:12.707454 kubelet[1888]: I0813 00:57:12.707397 1888 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:12.708168 kubelet[1888]: E0813 00:57:12.707991 1888 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.76:6443/api/v1/nodes\": dial tcp 10.128.0.76:6443: connect: connection refused" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:12.727605 kubelet[1888]: E0813 00:57:12.727401 1888 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.76:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal.185b2d9cbe7ce31e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,UID:ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,},FirstTimestamp:2025-08-13 00:57:11.024943902 +0000 UTC m=+0.506298299,LastTimestamp:2025-08-13 00:57:11.024943902 +0000 UTC m=+0.506298299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal,}" Aug 13 00:57:12.767456 env[1335]: time="2025-08-13T00:57:12.766456049Z" level=info msg="StartContainer for \"7ec9c88b32a239ba5f5488c743db6833f646aa24b2ebe8e44a053a83c9475f54\" returns successfully" Aug 13 00:57:12.799964 env[1335]: time="2025-08-13T00:57:12.799898890Z" level=info msg="StartContainer for \"2b11da90d78282e143f916d30c1eb844e38a31c0f82778c1b54e1e819774e15c\" returns successfully" Aug 13 00:57:12.820211 env[1335]: time="2025-08-13T00:57:12.820126006Z" level=info msg="StartContainer for \"b83f9d1d67a4fb71e6089936b132a9fb8c1cb985bcc867851240fdff7ce37dca\" returns successfully" Aug 13 00:57:14.314952 kubelet[1888]: I0813 00:57:14.314913 1888 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:16.634811 kubelet[1888]: E0813 00:57:16.634747 1888 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" not found" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:16.709952 kubelet[1888]: I0813 00:57:16.709875 1888 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:17.040239 kubelet[1888]: I0813 00:57:17.040090 1888 apiserver.go:52] "Watching apiserver" Aug 13 00:57:17.087955 kubelet[1888]: I0813 00:57:17.087895 1888 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:57:19.033838 systemd[1]: Reloading. Aug 13 00:57:19.186165 /usr/lib/systemd/system-generators/torcx-generator[2182]: time="2025-08-13T00:57:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:57:19.186878 /usr/lib/systemd/system-generators/torcx-generator[2182]: time="2025-08-13T00:57:19Z" level=info msg="torcx already run" Aug 13 00:57:19.302112 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:57:19.302140 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Aug 13 00:57:19.335061 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:57:19.477795 systemd[1]: Stopping kubelet.service... Aug 13 00:57:19.499154 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:57:19.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:19.499940 systemd[1]: Stopped kubelet.service. Aug 13 00:57:19.503957 systemd[1]: Starting kubelet.service... Aug 13 00:57:19.507149 kernel: kauditd_printk_skb: 44 callbacks suppressed Aug 13 00:57:19.507241 kernel: audit: type=1131 audit(1755046639.499:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:19.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:19.831258 systemd[1]: Started kubelet.service. Aug 13 00:57:19.854647 kernel: audit: type=1130 audit(1755046639.831:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:19.948466 kubelet[2240]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:57:19.948980 kubelet[2240]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:57:19.949048 kubelet[2240]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:57:19.949215 kubelet[2240]: I0813 00:57:19.949184 2240 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:57:19.968503 kubelet[2240]: I0813 00:57:19.968433 2240 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:57:19.968503 kubelet[2240]: I0813 00:57:19.968477 2240 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:57:19.968964 kubelet[2240]: I0813 00:57:19.968926 2240 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:57:19.970698 kubelet[2240]: I0813 00:57:19.970662 2240 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 00:57:19.973669 kubelet[2240]: I0813 00:57:19.973637 2240 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:57:19.979313 kubelet[2240]: E0813 00:57:19.979273 2240 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:57:19.979502 kubelet[2240]: I0813 00:57:19.979484 2240 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:57:19.983993 kubelet[2240]: I0813 00:57:19.983967 2240 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:57:19.984808 kubelet[2240]: I0813 00:57:19.984786 2240 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:57:19.985135 kubelet[2240]: I0813 00:57:19.985096 2240 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:57:19.985501 kubelet[2240]: I0813 00:57:19.985216 2240 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:57:19.985714 kubelet[2240]: I0813 00:57:19.985697 2240 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:57:19.985797 kubelet[2240]: I0813 00:57:19.985785 2240 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:57:19.985889 kubelet[2240]: I0813 00:57:19.985877 2240 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:57:19.986093 kubelet[2240]: I0813 00:57:19.986081 2240 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:57:19.987082 kubelet[2240]: I0813 00:57:19.987060 2240 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 
00:57:19.987359 kubelet[2240]: I0813 00:57:19.987341 2240 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:57:19.991720 kubelet[2240]: I0813 00:57:19.991679 2240 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:57:19.994346 kubelet[2240]: I0813 00:57:19.994311 2240 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:57:19.995447 kubelet[2240]: I0813 00:57:19.995410 2240 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:57:19.997107 kubelet[2240]: I0813 00:57:19.997075 2240 server.go:1274] "Started kubelet" Aug 13 00:57:20.004000 audit[2240]: AVC avc: denied { mac_admin } for pid=2240 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:57:20.015856 kubelet[2240]: I0813 00:57:20.004785 2240 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 00:57:20.015856 kubelet[2240]: I0813 00:57:20.004862 2240 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 00:57:20.015856 kubelet[2240]: I0813 00:57:20.004979 2240 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:57:20.016384 kubelet[2240]: I0813 00:57:20.016332 2240 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:57:20.018226 kubelet[2240]: I0813 00:57:20.018194 2240 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:57:20.026633 kernel: audit: type=1400 audit(1755046640.004:214): avc: denied { mac_admin } for pid=2240 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:57:20.031005 kubelet[2240]: I0813 00:57:20.018487 2240 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:57:20.031391 kubelet[2240]: I0813 00:57:20.019064 2240 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:57:20.031802 kubelet[2240]: I0813 00:57:20.022702 2240 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:57:20.033166 kubelet[2240]: I0813 00:57:20.024582 2240 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:57:20.033464 kubelet[2240]: I0813 00:57:20.033449 2240 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:57:20.033655 kubelet[2240]: E0813 00:57:20.024978 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" not found" Aug 13 00:57:20.004000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:57:20.038403 kubelet[2240]: I0813 00:57:20.038375 2240 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:57:20.047624 kernel: audit: type=1401 audit(1755046640.004:214): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 
00:57:20.004000 audit[2240]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ca6a80 a1=c000bfc6d8 a2=c000ca6a50 a3=25 items=0 ppid=1 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:20.097462 kubelet[2240]: I0813 00:57:20.089983 2240 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:57:20.097462 kubelet[2240]: I0813 00:57:20.093494 2240 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:57:20.097462 kubelet[2240]: I0813 00:57:20.093512 2240 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:57:20.097686 kernel: audit: type=1300 audit(1755046640.004:214): arch=c000003e syscall=188 success=no exit=-22 a0=c000ca6a80 a1=c000bfc6d8 a2=c000ca6a50 a3=25 items=0 ppid=1 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:20.101805 kubelet[2240]: I0813 00:57:20.101722 2240 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:57:20.004000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:57:20.134985 kubelet[2240]: I0813 00:57:20.118387 2240 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:57:20.134985 kubelet[2240]: I0813 00:57:20.118422 2240 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:57:20.134985 kubelet[2240]: I0813 00:57:20.118448 2240 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:57:20.134985 kubelet[2240]: E0813 00:57:20.118514 2240 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:57:20.136631 kernel: audit: type=1327 audit(1755046640.004:214): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:57:20.004000 audit[2240]: AVC avc: denied { mac_admin } for pid=2240 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:57:20.157919 kubelet[2240]: E0813 00:57:20.141692 2240 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:57:20.158635 kernel: audit: type=1400 audit(1755046640.004:215): avc: denied { mac_admin } for pid=2240 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:57:20.162081 kernel: audit: type=1401 audit(1755046640.004:215): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:57:20.004000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:57:20.004000 audit[2240]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b871c0 a1=c000bfc6f0 a2=c000ca6b10 a3=25 items=0 ppid=1 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:20.215120 kernel: audit: type=1300 audit(1755046640.004:215): arch=c000003e syscall=188 success=no exit=-22 a0=c000b871c0 a1=c000bfc6f0 a2=c000ca6b10 a3=25 items=0 ppid=1 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:20.215306 kernel: audit: type=1327 audit(1755046640.004:215): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:57:20.004000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:57:20.218720 kubelet[2240]: E0813 00:57:20.218672 2240 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:57:20.280691 kubelet[2240]: I0813 00:57:20.280656 2240 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:57:20.280911 kubelet[2240]: I0813 00:57:20.280890 2240 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:57:20.280997 kubelet[2240]: I0813 00:57:20.280985 2240 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:57:20.281256 kubelet[2240]: I0813 00:57:20.281237 2240 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:57:20.281373 kubelet[2240]: I0813 00:57:20.281345 2240 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:57:20.281446 kubelet[2240]: I0813 00:57:20.281435 2240 policy_none.go:49] "None policy: Start" Aug 13 00:57:20.282318 kubelet[2240]: I0813 00:57:20.282296 2240 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:57:20.282498 kubelet[2240]: I0813 00:57:20.282475 2240 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:57:20.282808 kubelet[2240]: I0813 00:57:20.282795 2240 state_mem.go:75] "Updated machine memory state" Aug 13 00:57:20.284513 kubelet[2240]: I0813 00:57:20.284485 2240 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:57:20.284000 audit[2240]: AVC avc: denied { mac_admin } for pid=2240 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:57:20.284000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:57:20.284000 audit[2240]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000dca000 a1=c000bfd908 a2=c000d55fb0 a3=25 items=0 ppid=1 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:20.284000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:57:20.285167 kubelet[2240]: I0813 00:57:20.285142 2240 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:57:20.285609 kubelet[2240]: I0813 00:57:20.285539 2240 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:57:20.287988 kubelet[2240]: I0813 00:57:20.287902 2240 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:57:20.289059 kubelet[2240]: I0813 00:57:20.289040 2240 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:57:20.420815 kubelet[2240]: I0813 00:57:20.420774 2240 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.435251 kubelet[2240]: W0813 00:57:20.434999 2240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 00:57:20.437719 kubelet[2240]: W0813 00:57:20.437126 2240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 00:57:20.440665 kubelet[2240]: I0813 00:57:20.440628 2240 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.440835 kubelet[2240]: I0813 00:57:20.440730 2240 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.444102 kubelet[2240]: W0813 00:57:20.443184 2240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 00:57:20.452002 kubelet[2240]: I0813 00:57:20.451942 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/885bbddcd25466a07c09e203789be89a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"885bbddcd25466a07c09e203789be89a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.452002 kubelet[2240]: I0813 00:57:20.452007 2240 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/885bbddcd25466a07c09e203789be89a-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"885bbddcd25466a07c09e203789be89a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.452243 kubelet[2240]: I0813 00:57:20.452044 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/885bbddcd25466a07c09e203789be89a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"885bbddcd25466a07c09e203789be89a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.452243 kubelet[2240]: I0813 00:57:20.452071 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91e9d003c04468891d7c852de23f7b78-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"91e9d003c04468891d7c852de23f7b78\") " pod="kube-system/kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.452243 kubelet[2240]: I0813 00:57:20.452104 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91e9d003c04468891d7c852de23f7b78-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"91e9d003c04468891d7c852de23f7b78\") " pod="kube-system/kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.452243 kubelet[2240]: I0813 00:57:20.452159 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/885bbddcd25466a07c09e203789be89a-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"885bbddcd25466a07c09e203789be89a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.452486 kubelet[2240]: I0813 00:57:20.452194 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5a425f2c256eb927c4d20610257a015-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"f5a425f2c256eb927c4d20610257a015\") " pod="kube-system/kube-scheduler-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.452486 kubelet[2240]: I0813 00:57:20.452229 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91e9d003c04468891d7c852de23f7b78-ca-certs\") pod \"kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"91e9d003c04468891d7c852de23f7b78\") " pod="kube-system/kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.452486 kubelet[2240]: I0813 00:57:20.452258 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/885bbddcd25466a07c09e203789be89a-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" (UID: \"885bbddcd25466a07c09e203789be89a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:57:20.993925 kubelet[2240]: I0813 00:57:20.993856 2240 apiserver.go:52] "Watching apiserver" Aug 13 00:57:21.034641 kubelet[2240]: I0813 00:57:21.033824 2240 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:57:21.202516 kubelet[2240]: I0813 00:57:21.202438 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" podStartSLOduration=1.202385584 podStartE2EDuration="1.202385584s" podCreationTimestamp="2025-08-13 00:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:57:21.200074468 +0000 UTC m=+1.338580827" watchObservedRunningTime="2025-08-13 00:57:21.202385584 +0000 UTC m=+1.340891871" Aug 13 00:57:21.233662 kubelet[2240]: I0813 00:57:21.233487 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" podStartSLOduration=1.233460682 podStartE2EDuration="1.233460682s" podCreationTimestamp="2025-08-13 00:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:57:21.222063134 +0000 UTC m=+1.360569423" watchObservedRunningTime="2025-08-13 00:57:21.233460682 +0000 UTC m=+1.371966979" Aug 13 00:57:21.250104 kubelet[2240]: I0813 00:57:21.249934 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" podStartSLOduration=1.249910572 podStartE2EDuration="1.249910572s" podCreationTimestamp="2025-08-13 00:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:57:21.234058893 +0000 UTC m=+1.372565182" watchObservedRunningTime="2025-08-13 00:57:21.249910572 +0000 UTC m=+1.388416862" Aug 13 00:57:22.323806 update_engine[1324]: I0813 00:57:22.323715 1324 update_attempter.cc:509] Updating boot flags... Aug 13 00:57:23.432237 kubelet[2240]: I0813 00:57:23.432184 2240 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:57:23.433430 env[1335]: time="2025-08-13T00:57:23.433317828Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:57:23.434047 kubelet[2240]: I0813 00:57:23.433792 2240 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:57:24.280882 kubelet[2240]: I0813 00:57:24.280820 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54d74e48-ca03-4a03-a04e-1012f7231962-lib-modules\") pod \"kube-proxy-hcpxx\" (UID: \"54d74e48-ca03-4a03-a04e-1012f7231962\") " pod="kube-system/kube-proxy-hcpxx" Aug 13 00:57:24.281135 kubelet[2240]: I0813 00:57:24.280955 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgzpx\" (UniqueName: \"kubernetes.io/projected/54d74e48-ca03-4a03-a04e-1012f7231962-kube-api-access-dgzpx\") pod \"kube-proxy-hcpxx\" (UID: \"54d74e48-ca03-4a03-a04e-1012f7231962\") " pod="kube-system/kube-proxy-hcpxx" Aug 13 00:57:24.281135 kubelet[2240]: I0813 00:57:24.281007 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54d74e48-ca03-4a03-a04e-1012f7231962-kube-proxy\") pod \"kube-proxy-hcpxx\" (UID: \"54d74e48-ca03-4a03-a04e-1012f7231962\") " pod="kube-system/kube-proxy-hcpxx" Aug 13 00:57:24.281135 kubelet[2240]: I0813 00:57:24.281031 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54d74e48-ca03-4a03-a04e-1012f7231962-xtables-lock\") pod \"kube-proxy-hcpxx\" (UID: \"54d74e48-ca03-4a03-a04e-1012f7231962\") " pod="kube-system/kube-proxy-hcpxx" Aug 13 00:57:24.391533 kubelet[2240]: I0813 00:57:24.391484 2240 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:57:24.483693 kubelet[2240]: I0813 00:57:24.483633 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64ba2308-f31a-42cd-b7cd-cb3d16b1f13e-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-vgll5\" (UID: \"64ba2308-f31a-42cd-b7cd-cb3d16b1f13e\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-vgll5" Aug 13 00:57:24.484332 kubelet[2240]: I0813 00:57:24.484302 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fffb\" (UniqueName: \"kubernetes.io/projected/64ba2308-f31a-42cd-b7cd-cb3d16b1f13e-kube-api-access-4fffb\") pod \"tigera-operator-5bf8dfcb4-vgll5\" (UID: \"64ba2308-f31a-42cd-b7cd-cb3d16b1f13e\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-vgll5" Aug 13 00:57:24.537451 env[1335]: time="2025-08-13T00:57:24.537310640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcpxx,Uid:54d74e48-ca03-4a03-a04e-1012f7231962,Namespace:kube-system,Attempt:0,}" Aug 13 00:57:24.571070 env[1335]: time="2025-08-13T00:57:24.570922160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:57:24.571310 env[1335]: time="2025-08-13T00:57:24.571029108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:57:24.571310 env[1335]: time="2025-08-13T00:57:24.571088458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:57:24.571708 env[1335]: time="2025-08-13T00:57:24.571634956Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36e4f501f0d94deed486114ba8cffd9fb97feada9a3b50d5546f40d09d943f54 pid=2307 runtime=io.containerd.runc.v2 Aug 13 00:57:24.652400 env[1335]: time="2025-08-13T00:57:24.652338879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcpxx,Uid:54d74e48-ca03-4a03-a04e-1012f7231962,Namespace:kube-system,Attempt:0,} returns sandbox id \"36e4f501f0d94deed486114ba8cffd9fb97feada9a3b50d5546f40d09d943f54\"" Aug 13 00:57:24.657952 env[1335]: time="2025-08-13T00:57:24.657896003Z" level=info msg="CreateContainer within sandbox \"36e4f501f0d94deed486114ba8cffd9fb97feada9a3b50d5546f40d09d943f54\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:57:24.682862 env[1335]: time="2025-08-13T00:57:24.682780528Z" level=info msg="CreateContainer within sandbox \"36e4f501f0d94deed486114ba8cffd9fb97feada9a3b50d5546f40d09d943f54\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ac68f028018faedb90a05abcaa6973626371ff88326a59e84018dc05d48d0d5\"" Aug 13 00:57:24.684816 env[1335]: time="2025-08-13T00:57:24.684762836Z" level=info msg="StartContainer for \"2ac68f028018faedb90a05abcaa6973626371ff88326a59e84018dc05d48d0d5\"" Aug 13 00:57:24.754539 env[1335]: time="2025-08-13T00:57:24.754461251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-vgll5,Uid:64ba2308-f31a-42cd-b7cd-cb3d16b1f13e,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:57:24.781150 env[1335]: time="2025-08-13T00:57:24.781076763Z" level=info msg="StartContainer for \"2ac68f028018faedb90a05abcaa6973626371ff88326a59e84018dc05d48d0d5\" returns successfully" Aug 13 00:57:24.804023 env[1335]: time="2025-08-13T00:57:24.803302441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:57:24.804366 env[1335]: time="2025-08-13T00:57:24.804317207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:57:24.804746 env[1335]: time="2025-08-13T00:57:24.804666067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:57:24.805495 env[1335]: time="2025-08-13T00:57:24.805412451Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba53423b80ca0e253ce875c0dddf9d9df62b7a019198b37606063b2c5a6d7137 pid=2382 runtime=io.containerd.runc.v2 Aug 13 00:57:24.919176 env[1335]: time="2025-08-13T00:57:24.919108105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-vgll5,Uid:64ba2308-f31a-42cd-b7cd-cb3d16b1f13e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ba53423b80ca0e253ce875c0dddf9d9df62b7a019198b37606063b2c5a6d7137\"" Aug 13 00:57:24.925159 env[1335]: time="2025-08-13T00:57:24.925104516Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:57:25.000000 audit[2452]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.015676 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 13 00:57:25.015861 kernel: audit: type=1325 audit(1755046645.000:217): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.000000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd31e3a560 a2=0 a3=7ffd31e3a54c items=0 ppid=2359 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.000000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:57:25.075610 kernel: audit: type=1300 audit(1755046645.000:217): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd31e3a560 a2=0 a3=7ffd31e3a54c items=0 ppid=2359 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.075817 kernel: audit: type=1327 audit(1755046645.000:217): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:57:25.075871 kernel: audit: type=1325 audit(1755046645.002:218): table=nat:39 family=10 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.002000 audit[2453]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.093661 kernel: audit: type=1300 audit(1755046645.002:218): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe977806c0 a2=0 a3=7ffe977806ac items=0 ppid=2359 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.002000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe977806c0 a2=0 a3=7ffe977806ac items=0 ppid=2359 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.002000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:57:25.142652 kernel: audit: type=1327 audit(1755046645.002:218): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:57:25.160082 kernel: audit: type=1325 audit(1755046645.006:219): table=mangle:40 family=2 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.006000 audit[2451]: NETFILTER_CFG table=mangle:40 family=2 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.192207 kernel: audit: type=1300 audit(1755046645.006:219): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffecf6e7810 a2=0 a3=0 items=0 ppid=2359 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.006000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffecf6e7810 a2=0 a3=0 items=0 ppid=2359 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.006000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:57:25.210676 kernel: audit: type=1327 audit(1755046645.006:219): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:57:25.210785 kubelet[2240]: I0813 00:57:25.202899 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hcpxx" podStartSLOduration=1.20286627 podStartE2EDuration="1.20286627s" podCreationTimestamp="2025-08-13 00:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:57:25.202802905 +0000 UTC m=+5.341309195" watchObservedRunningTime="2025-08-13 00:57:25.20286627 +0000 UTC m=+5.341372559" Aug 13 00:57:25.008000 audit[2455]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.008000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc86c4d550 a2=0 a3=7ffc86c4d53c items=0 ppid=2359 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:57:25.010000 audit[2456]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.010000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6727ec10 a2=0 a3=7fff6727ebfc items=0 ppid=2359 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.010000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:57:25.023000 audit[2454]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.229696 kernel: audit: type=1325 audit(1755046645.008:220): table=nat:41 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.023000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1ea35bb0 a2=0 a3=7ffd1ea35b9c items=0 ppid=2359 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.023000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:57:25.125000 audit[2457]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.125000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd59016e30 a2=0 a3=7ffd59016e1c items=0 ppid=2359 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:57:25.141000 audit[2459]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.141000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff14b5ab20 a2=0 a3=7fff14b5ab0c items=0 ppid=2359 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.141000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Aug 13 00:57:25.151000 audit[2462]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.151000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc52121100 a2=0 a3=7ffc521210ec items=0 ppid=2359 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Aug 13 00:57:25.151000 audit[2463]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.151000 audit[2463]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe55ab4710 a2=0 a3=7ffe55ab46fc items=0 ppid=2359 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:57:25.159000 audit[2465]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.159000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffecbe28e20 a2=0 a3=7ffecbe28e0c items=0 ppid=2359 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.159000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:57:25.159000 audit[2466]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.159000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9d9e3390 a2=0 a3=7ffe9d9e337c items=0 ppid=2359 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.159000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:57:25.169000 audit[2468]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.169000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffed29f2fb0 a2=0 a3=7ffed29f2f9c items=0 ppid=2359 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.169000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:57:25.197000 audit[2471]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.197000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd75f06700 a2=0 a3=7ffd75f066ec items=0 ppid=2359 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.197000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Aug 13 00:57:25.211000 audit[2472]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.211000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc26278460 a2=0 a3=7ffc2627844c items=0 ppid=2359 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:57:25.234000 audit[2474]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.234000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdf3f3c2f0 a2=0 a3=7ffdf3f3c2dc items=0 ppid=2359 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:57:25.237000 audit[2475]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.237000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8dfbbc20 a2=0 a3=7fff8dfbbc0c items=0 ppid=2359 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.237000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:57:25.242000 audit[2477]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.242000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8a669e40 a2=0 a3=7ffe8a669e2c items=0 ppid=2359 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:57:25.248000 audit[2480]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.248000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb129f200 a2=0 a3=7ffcb129f1ec items=0 ppid=2359 
pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:57:25.255000 audit[2483]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.255000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde39bfb00 a2=0 a3=7ffde39bfaec items=0 ppid=2359 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.255000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:57:25.257000 audit[2484]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.257000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffd29fe800 a2=0 a3=7fffd29fe7ec items=0 ppid=2359 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.257000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:57:25.263000 audit[2486]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.263000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe46ea71a0 a2=0 a3=7ffe46ea718c items=0 ppid=2359 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:57:25.269000 audit[2489]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.269000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc4a1a8f40 a2=0 a3=7ffc4a1a8f2c items=0 ppid=2359 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.269000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:57:25.271000 audit[2490]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.271000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed714f720 a2=0 a3=7ffed714f70c items=0 ppid=2359 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:57:25.275000 audit[2492]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:57:25.275000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffbbb853b0 a2=0 a3=7fffbbb8539c items=0 ppid=2359 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 00:57:25.315000 audit[2498]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:25.315000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc737fa870 a2=0 a3=7ffc737fa85c items=0 ppid=2359 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.315000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:25.329000 audit[2498]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:25.329000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc737fa870 a2=0 a3=7ffc737fa85c items=0 ppid=2359 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.329000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:25.332000 audit[2503]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.332000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe3b19c890 a2=0 a3=7ffe3b19c87c items=0 ppid=2359 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:57:25.338000 audit[2505]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.338000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe0ce7fb30 a2=0 a3=7ffe0ce7fb1c items=0 ppid=2359 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.338000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Aug 13 00:57:25.344000 audit[2508]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.344000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffeac1b1e20 a2=0 a3=7ffeac1b1e0c items=0 ppid=2359 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.344000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Aug 13 00:57:25.346000 audit[2509]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.346000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7eb7ff30 a2=0 a3=7ffc7eb7ff1c items=0 ppid=2359 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:57:25.351000 audit[2511]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.351000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd39289dd0 a2=0 a3=7ffd39289dbc items=0 ppid=2359 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:57:25.353000 audit[2512]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 
00:57:25.353000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3e6cb5a0 a2=0 a3=7fff3e6cb58c items=0 ppid=2359 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.353000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:57:25.358000 audit[2514]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.358000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffca17a0d10 a2=0 a3=7ffca17a0cfc items=0 ppid=2359 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.358000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Aug 13 00:57:25.363000 audit[2517]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.363000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdeea22750 a2=0 a3=7ffdeea2273c items=0 ppid=2359 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.363000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:57:25.366000 audit[2518]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.366000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff65e9a9e0 a2=0 a3=7fff65e9a9cc items=0 ppid=2359 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:57:25.370000 audit[2520]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.370000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf262ef40 a2=0 a3=7ffcf262ef2c items=0 ppid=2359 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.370000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:57:25.372000 audit[2521]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.372000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca0da39f0 a2=0 a3=7ffca0da39dc items=0 ppid=2359 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.372000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:57:25.380000 audit[2523]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.380000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa83acec0 a2=0 a3=7fffa83aceac items=0 ppid=2359 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:57:25.388000 audit[2526]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.388000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4e98b060 a2=0 a3=7ffc4e98b04c items=0 ppid=2359 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.388000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:57:25.394000 audit[2529]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.394000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe587e7700 a2=0 a3=7ffe587e76ec items=0 ppid=2359 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.394000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Aug 13 00:57:25.405664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009544589.mount: Deactivated successfully. 
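Aside (illustrative only, not part of the captured stream): the proctitle= fields in the audit records above and below are the hex-encoded, NUL-separated argument vectors of the iptables/ip6tables invocations kube-proxy issues while creating its chains; entries that stop mid-argument appear to be cut by the audit subsystem's proctitle length cap rather than by this capture. A minimal Python sketch for decoding them (the helper name is invented here):

    # Decode an audit PROCTITLE hex field (NUL-separated argv) back into the command line.
    def decode_proctitle(hex_str: str) -> list[str]:
        raw = bytes.fromhex(hex_str)
        return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

    # The record audit(1755046645.006:219) above decodes to kube-proxy's canary-chain setup:
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    ))
    # -> ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-PROXY-CANARY', '-t', 'mangle']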
Aug 13 00:57:25.406000 audit[2530]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.406000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffef203ed0 a2=0 a3=7fffef203ebc items=0 ppid=2359 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.406000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:57:25.412000 audit[2532]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.412000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd5c341090 a2=0 a3=7ffd5c34107c items=0 ppid=2359 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:57:25.419000 audit[2535]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.419000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc58540610 a2=0 a3=7ffc585405fc items=0 ppid=2359 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:57:25.421000 audit[2536]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.421000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecfe5e640 a2=0 a3=7ffecfe5e62c items=0 ppid=2359 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.421000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:57:25.427000 audit[2538]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.427000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff61b55ca0 a2=0 a3=7fff61b55c8c items=0 ppid=2359 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.427000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 00:57:25.430000 audit[2539]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.430000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca5392780 a2=0 a3=7ffca539276c items=0 ppid=2359 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.430000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 00:57:25.434000 audit[2541]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.434000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe18493f70 a2=0 a3=7ffe18493f5c items=0 ppid=2359 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:57:25.441000 audit[2544]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:57:25.441000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff58ff5420 a2=0 a3=7fff58ff540c items=0 ppid=2359 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:57:25.447000 audit[2546]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:57:25.447000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffca251f140 a2=0 a3=7ffca251f12c items=0 ppid=2359 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.447000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:25.448000 audit[2546]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:57:25.448000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffca251f140 a2=0 a3=7ffca251f12c items=0 ppid=2359 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:25.448000 
audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:26.437056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount43307851.mount: Deactivated successfully. Aug 13 00:57:28.065902 env[1335]: time="2025-08-13T00:57:28.065825611Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:28.069660 env[1335]: time="2025-08-13T00:57:28.069553755Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:28.072625 env[1335]: time="2025-08-13T00:57:28.072552099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:28.074956 env[1335]: time="2025-08-13T00:57:28.074909768Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:28.075836 env[1335]: time="2025-08-13T00:57:28.075772542Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:57:28.082645 env[1335]: time="2025-08-13T00:57:28.082569560Z" level=info msg="CreateContainer within sandbox \"ba53423b80ca0e253ce875c0dddf9d9df62b7a019198b37606063b2c5a6d7137\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:57:28.108927 env[1335]: time="2025-08-13T00:57:28.108850281Z" level=info msg="CreateContainer within sandbox \"ba53423b80ca0e253ce875c0dddf9d9df62b7a019198b37606063b2c5a6d7137\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fcd4f9a438cda6ded13901ab2d865e5fcdd596d8a89e780ce51b6db5351b00c9\"" Aug 13 00:57:28.111811 env[1335]: time="2025-08-13T00:57:28.109794338Z" level=info msg="StartContainer for \"fcd4f9a438cda6ded13901ab2d865e5fcdd596d8a89e780ce51b6db5351b00c9\"" Aug 13 00:57:28.210857 env[1335]: time="2025-08-13T00:57:28.210786204Z" level=info msg="StartContainer for \"fcd4f9a438cda6ded13901ab2d865e5fcdd596d8a89e780ce51b6db5351b00c9\" returns successfully" Aug 13 00:57:29.213332 kubelet[2240]: I0813 00:57:29.213258 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-vgll5" podStartSLOduration=2.059657358 podStartE2EDuration="5.213228263s" podCreationTimestamp="2025-08-13 00:57:24 +0000 UTC" firstStartedPulling="2025-08-13 00:57:24.923800215 +0000 UTC m=+5.062306483" lastFinishedPulling="2025-08-13 00:57:28.077371121 +0000 UTC m=+8.215877388" observedRunningTime="2025-08-13 00:57:29.21279859 +0000 UTC m=+9.351304881" watchObservedRunningTime="2025-08-13 00:57:29.213228263 +0000 UTC m=+9.351734552" Aug 13 00:57:36.279214 sudo[1604]: pam_unix(sudo:session): session closed for user root Aug 13 00:57:36.311202 kernel: kauditd_printk_skb: 143 callbacks suppressed Aug 13 00:57:36.311478 kernel: audit: type=1106 audit(1755046656.279:268): pid=1604 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" 
exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:57:36.279000 audit[1604]: USER_END pid=1604 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:57:36.313000 audit[1604]: CRED_DISP pid=1604 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:57:36.346635 kernel: audit: type=1104 audit(1755046656.313:269): pid=1604 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:57:36.359141 sshd[1600]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:36.406998 kernel: audit: type=1106 audit(1755046656.370:270): pid=1600 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:36.370000 audit[1600]: USER_END pid=1600 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:36.405208 systemd[1]: sshd@6-10.128.0.76:22-139.178.68.195:54378.service: Deactivated successfully. Aug 13 00:57:36.410459 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:57:36.411471 systemd-logind[1323]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:57:36.416936 systemd-logind[1323]: Removed session 7. Aug 13 00:57:36.370000 audit[1600]: CRED_DISP pid=1600 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:36.462617 kernel: audit: type=1104 audit(1755046656.370:271): pid=1600 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:36.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.76:22-139.178.68.195:54378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:36.494716 kernel: audit: type=1131 audit(1755046656.405:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.76:22-139.178.68.195:54378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:57:37.640000 audit[2629]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.658702 kernel: audit: type=1325 audit(1755046657.640:273): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.640000 audit[2629]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd422e64a0 a2=0 a3=7ffd422e648c items=0 ppid=2359 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.703624 kernel: audit: type=1300 audit(1755046657.640:273): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd422e64a0 a2=0 a3=7ffd422e648c items=0 ppid=2359 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.640000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:37.754628 kernel: audit: type=1327 audit(1755046657.640:273): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:37.663000 audit[2629]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.772627 kernel: audit: type=1325 audit(1755046657.663:274): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.663000 audit[2629]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd422e64a0 a2=0 a3=0 items=0 ppid=2359 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.808065 kernel: audit: type=1300 audit(1755046657.663:274): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd422e64a0 a2=0 a3=0 items=0 ppid=2359 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:37.713000 audit[2631]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.713000 audit[2631]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffccb08e990 a2=0 a3=7ffccb08e97c items=0 ppid=2359 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:37.733000 audit[2631]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2631 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.733000 audit[2631]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffccb08e990 a2=0 a3=0 items=0 ppid=2359 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.733000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:42.215000 audit[2633]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:42.225319 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:57:42.225529 kernel: audit: type=1325 audit(1755046662.215:277): table=filter:93 family=2 entries=16 op=nft_register_rule pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:42.215000 audit[2633]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdbbcddf60 a2=0 a3=7ffdbbcddf4c items=0 ppid=2359 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:42.275775 kernel: audit: type=1300 audit(1755046662.215:277): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdbbcddf60 a2=0 a3=7ffdbbcddf4c items=0 ppid=2359 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:42.215000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:42.292630 kernel: audit: type=1327 audit(1755046662.215:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:42.279000 audit[2633]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:42.310637 kernel: audit: type=1325 audit(1755046662.279:278): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:42.279000 audit[2633]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbbcddf60 a2=0 a3=0 items=0 ppid=2359 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:42.345843 kernel: audit: type=1300 audit(1755046662.279:278): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbbcddf60 a2=0 a3=0 items=0 ppid=2359 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:42.279000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:42.376518 kernel: audit: type=1327 audit(1755046662.279:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 
13 00:57:42.376739 kernel: audit: type=1325 audit(1755046662.370:279): table=filter:95 family=2 entries=17 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:42.370000 audit[2635]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:42.370000 audit[2635]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffe51564f0 a2=0 a3=7fffe51564dc items=0 ppid=2359 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:42.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:42.445133 kernel: audit: type=1300 audit(1755046662.370:279): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffe51564f0 a2=0 a3=7fffe51564dc items=0 ppid=2359 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:42.445325 kernel: audit: type=1327 audit(1755046662.370:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:42.394000 audit[2635]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:42.475623 kernel: audit: type=1325 audit(1755046662.394:280): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:42.394000 audit[2635]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe51564f0 a2=0 a3=0 items=0 ppid=2359 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:42.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:42.778175 kubelet[2240]: I0813 00:57:42.778023 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmgcx\" (UniqueName: \"kubernetes.io/projected/1ca9cf10-65b1-4533-aae6-0bece8dc9feb-kube-api-access-jmgcx\") pod \"calico-typha-67d846c7d9-nxqmc\" (UID: \"1ca9cf10-65b1-4533-aae6-0bece8dc9feb\") " pod="calico-system/calico-typha-67d846c7d9-nxqmc" Aug 13 00:57:42.778175 kubelet[2240]: I0813 00:57:42.778103 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ca9cf10-65b1-4533-aae6-0bece8dc9feb-tigera-ca-bundle\") pod \"calico-typha-67d846c7d9-nxqmc\" (UID: \"1ca9cf10-65b1-4533-aae6-0bece8dc9feb\") " pod="calico-system/calico-typha-67d846c7d9-nxqmc" Aug 13 00:57:42.778175 kubelet[2240]: I0813 00:57:42.778132 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1ca9cf10-65b1-4533-aae6-0bece8dc9feb-typha-certs\") pod \"calico-typha-67d846c7d9-nxqmc\" (UID: \"1ca9cf10-65b1-4533-aae6-0bece8dc9feb\") " 
pod="calico-system/calico-typha-67d846c7d9-nxqmc" Aug 13 00:57:42.980551 kubelet[2240]: I0813 00:57:42.980470 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47711698-865c-441b-b450-7281daa79808-var-lib-calico\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 00:57:42.980551 kubelet[2240]: I0813 00:57:42.980527 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47711698-865c-441b-b450-7281daa79808-lib-modules\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 00:57:42.980551 kubelet[2240]: I0813 00:57:42.980558 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47711698-865c-441b-b450-7281daa79808-tigera-ca-bundle\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 00:57:42.980871 kubelet[2240]: I0813 00:57:42.980584 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47711698-865c-441b-b450-7281daa79808-xtables-lock\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 00:57:42.980871 kubelet[2240]: I0813 00:57:42.980638 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/47711698-865c-441b-b450-7281daa79808-cni-bin-dir\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 00:57:42.980871 kubelet[2240]: I0813 00:57:42.980664 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/47711698-865c-441b-b450-7281daa79808-flexvol-driver-host\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 00:57:42.980871 kubelet[2240]: I0813 00:57:42.980691 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/47711698-865c-441b-b450-7281daa79808-var-run-calico\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 00:57:42.980871 kubelet[2240]: I0813 00:57:42.980716 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jldc4\" (UniqueName: \"kubernetes.io/projected/47711698-865c-441b-b450-7281daa79808-kube-api-access-jldc4\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 00:57:42.981151 kubelet[2240]: I0813 00:57:42.980740 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/47711698-865c-441b-b450-7281daa79808-cni-log-dir\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 
00:57:42.981151 kubelet[2240]: I0813 00:57:42.980763 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/47711698-865c-441b-b450-7281daa79808-node-certs\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 00:57:42.981151 kubelet[2240]: I0813 00:57:42.980790 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/47711698-865c-441b-b450-7281daa79808-cni-net-dir\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 00:57:42.981151 kubelet[2240]: I0813 00:57:42.980820 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/47711698-865c-441b-b450-7281daa79808-policysync\") pod \"calico-node-hgvg8\" (UID: \"47711698-865c-441b-b450-7281daa79808\") " pod="calico-system/calico-node-hgvg8" Aug 13 00:57:43.015540 env[1335]: time="2025-08-13T00:57:43.015036485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67d846c7d9-nxqmc,Uid:1ca9cf10-65b1-4533-aae6-0bece8dc9feb,Namespace:calico-system,Attempt:0,}" Aug 13 00:57:43.051584 env[1335]: time="2025-08-13T00:57:43.051384344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:57:43.051584 env[1335]: time="2025-08-13T00:57:43.051510780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:57:43.051824 env[1335]: time="2025-08-13T00:57:43.051567574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:57:43.051904 env[1335]: time="2025-08-13T00:57:43.051839954Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be678abdfcc6ac1d9719529a43e70e75c121b4467a66f6d01c477280045d8ca9 pid=2644 runtime=io.containerd.runc.v2 Aug 13 00:57:43.090755 kubelet[2240]: E0813 00:57:43.089742 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.090755 kubelet[2240]: W0813 00:57:43.089773 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.090755 kubelet[2240]: E0813 00:57:43.089806 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.090755 kubelet[2240]: E0813 00:57:43.090183 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.090755 kubelet[2240]: W0813 00:57:43.090198 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.090755 kubelet[2240]: E0813 00:57:43.090218 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.090755 kubelet[2240]: E0813 00:57:43.090691 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.090755 kubelet[2240]: W0813 00:57:43.090708 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.090755 kubelet[2240]: E0813 00:57:43.090738 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.091371 kubelet[2240]: E0813 00:57:43.091119 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.091371 kubelet[2240]: W0813 00:57:43.091133 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.091371 kubelet[2240]: E0813 00:57:43.091158 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.091564 kubelet[2240]: E0813 00:57:43.091471 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.091564 kubelet[2240]: W0813 00:57:43.091484 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.091564 kubelet[2240]: E0813 00:57:43.091506 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.092369 kubelet[2240]: E0813 00:57:43.091829 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.092369 kubelet[2240]: W0813 00:57:43.091843 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.092369 kubelet[2240]: E0813 00:57:43.091862 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.092369 kubelet[2240]: E0813 00:57:43.092137 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.092369 kubelet[2240]: W0813 00:57:43.092151 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.092369 kubelet[2240]: E0813 00:57:43.092171 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.092719 kubelet[2240]: E0813 00:57:43.092565 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.092719 kubelet[2240]: W0813 00:57:43.092579 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.092719 kubelet[2240]: E0813 00:57:43.092614 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.120259 kubelet[2240]: E0813 00:57:43.120217 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.120259 kubelet[2240]: W0813 00:57:43.120250 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.120495 kubelet[2240]: E0813 00:57:43.120283 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.132119 kubelet[2240]: E0813 00:57:43.132078 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.132119 kubelet[2240]: W0813 00:57:43.132112 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.132415 kubelet[2240]: E0813 00:57:43.132145 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.139420 kubelet[2240]: E0813 00:57:43.139295 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hv7wr" podUID="281890f3-f0b5-4757-b7db-b03ab8faf735" Aug 13 00:57:43.153381 kubelet[2240]: E0813 00:57:43.153340 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.153381 kubelet[2240]: W0813 00:57:43.153375 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.153634 kubelet[2240]: E0813 00:57:43.153407 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.153834 kubelet[2240]: E0813 00:57:43.153806 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.153834 kubelet[2240]: W0813 00:57:43.153834 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.154002 kubelet[2240]: E0813 00:57:43.153857 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.154198 kubelet[2240]: E0813 00:57:43.154166 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.154198 kubelet[2240]: W0813 00:57:43.154187 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.154376 kubelet[2240]: E0813 00:57:43.154207 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.154539 kubelet[2240]: E0813 00:57:43.154508 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.154539 kubelet[2240]: W0813 00:57:43.154528 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.154752 kubelet[2240]: E0813 00:57:43.154547 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.154901 kubelet[2240]: E0813 00:57:43.154878 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.154901 kubelet[2240]: W0813 00:57:43.154900 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.155050 kubelet[2240]: E0813 00:57:43.154919 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.155225 kubelet[2240]: E0813 00:57:43.155202 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.155225 kubelet[2240]: W0813 00:57:43.155224 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.155374 kubelet[2240]: E0813 00:57:43.155241 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.155633 kubelet[2240]: E0813 00:57:43.155584 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.155633 kubelet[2240]: W0813 00:57:43.155626 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.155799 kubelet[2240]: E0813 00:57:43.155647 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.156640 kubelet[2240]: E0813 00:57:43.155935 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.156640 kubelet[2240]: W0813 00:57:43.155951 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.156640 kubelet[2240]: E0813 00:57:43.155967 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.156640 kubelet[2240]: E0813 00:57:43.156262 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.156640 kubelet[2240]: W0813 00:57:43.156276 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.156640 kubelet[2240]: E0813 00:57:43.156294 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.156640 kubelet[2240]: E0813 00:57:43.156577 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.159667 kubelet[2240]: W0813 00:57:43.159627 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.159667 kubelet[2240]: E0813 00:57:43.159666 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.160032 kubelet[2240]: E0813 00:57:43.160008 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.160032 kubelet[2240]: W0813 00:57:43.160032 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.160186 kubelet[2240]: E0813 00:57:43.160051 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.161622 kubelet[2240]: E0813 00:57:43.160370 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.161622 kubelet[2240]: W0813 00:57:43.160389 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.161622 kubelet[2240]: E0813 00:57:43.160406 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.161622 kubelet[2240]: E0813 00:57:43.160734 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.161622 kubelet[2240]: W0813 00:57:43.160748 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.161622 kubelet[2240]: E0813 00:57:43.160765 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.161622 kubelet[2240]: E0813 00:57:43.161043 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.161622 kubelet[2240]: W0813 00:57:43.161057 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.161622 kubelet[2240]: E0813 00:57:43.161073 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.161622 kubelet[2240]: E0813 00:57:43.161357 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.162201 kubelet[2240]: W0813 00:57:43.161372 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.162201 kubelet[2240]: E0813 00:57:43.161387 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.162201 kubelet[2240]: E0813 00:57:43.161714 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.162201 kubelet[2240]: W0813 00:57:43.161729 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.162201 kubelet[2240]: E0813 00:57:43.161745 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.162201 kubelet[2240]: E0813 00:57:43.162032 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.162201 kubelet[2240]: W0813 00:57:43.162046 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.162201 kubelet[2240]: E0813 00:57:43.162060 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.163633 kubelet[2240]: E0813 00:57:43.163585 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.163633 kubelet[2240]: W0813 00:57:43.163631 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.163825 kubelet[2240]: E0813 00:57:43.163652 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.165637 kubelet[2240]: E0813 00:57:43.163962 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.165637 kubelet[2240]: W0813 00:57:43.163979 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.165637 kubelet[2240]: E0813 00:57:43.163996 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.165637 kubelet[2240]: E0813 00:57:43.164287 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.165637 kubelet[2240]: W0813 00:57:43.164302 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.165637 kubelet[2240]: E0813 00:57:43.164317 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.189109 env[1335]: time="2025-08-13T00:57:43.189028018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hgvg8,Uid:47711698-865c-441b-b450-7281daa79808,Namespace:calico-system,Attempt:0,}" Aug 13 00:57:43.200145 kubelet[2240]: E0813 00:57:43.200092 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.200348 kubelet[2240]: W0813 00:57:43.200142 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.200348 kubelet[2240]: E0813 00:57:43.200190 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.201732 kubelet[2240]: I0813 00:57:43.201682 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/281890f3-f0b5-4757-b7db-b03ab8faf735-registration-dir\") pod \"csi-node-driver-hv7wr\" (UID: \"281890f3-f0b5-4757-b7db-b03ab8faf735\") " pod="calico-system/csi-node-driver-hv7wr" Aug 13 00:57:43.205240 kubelet[2240]: E0813 00:57:43.205186 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.205240 kubelet[2240]: W0813 00:57:43.205238 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.205447 kubelet[2240]: E0813 00:57:43.205284 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.205447 kubelet[2240]: I0813 00:57:43.205343 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/281890f3-f0b5-4757-b7db-b03ab8faf735-varrun\") pod \"csi-node-driver-hv7wr\" (UID: \"281890f3-f0b5-4757-b7db-b03ab8faf735\") " pod="calico-system/csi-node-driver-hv7wr" Aug 13 00:57:43.229627 kubelet[2240]: E0813 00:57:43.229398 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.229627 kubelet[2240]: W0813 00:57:43.229436 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.229627 kubelet[2240]: E0813 00:57:43.229480 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.229627 kubelet[2240]: I0813 00:57:43.229541 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/281890f3-f0b5-4757-b7db-b03ab8faf735-kubelet-dir\") pod \"csi-node-driver-hv7wr\" (UID: \"281890f3-f0b5-4757-b7db-b03ab8faf735\") " pod="calico-system/csi-node-driver-hv7wr" Aug 13 00:57:43.241631 env[1335]: time="2025-08-13T00:57:43.237741836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:57:43.241631 env[1335]: time="2025-08-13T00:57:43.238111459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:57:43.241631 env[1335]: time="2025-08-13T00:57:43.238222244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:57:43.241631 env[1335]: time="2025-08-13T00:57:43.238781228Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80e492e725df1df56613eecb8f6b948029509bee48ae76b9bc100d7b75195e61 pid=2723 runtime=io.containerd.runc.v2 Aug 13 00:57:43.245627 kubelet[2240]: E0813 00:57:43.245347 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.245627 kubelet[2240]: W0813 00:57:43.245413 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.246123 kubelet[2240]: E0813 00:57:43.246096 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.246247 kubelet[2240]: W0813 00:57:43.246152 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.253632 kubelet[2240]: E0813 00:57:43.252054 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.253632 kubelet[2240]: I0813 00:57:43.252160 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/281890f3-f0b5-4757-b7db-b03ab8faf735-socket-dir\") pod \"csi-node-driver-hv7wr\" (UID: \"281890f3-f0b5-4757-b7db-b03ab8faf735\") " pod="calico-system/csi-node-driver-hv7wr" Aug 13 00:57:43.253632 kubelet[2240]: E0813 00:57:43.252248 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.253632 kubelet[2240]: E0813 00:57:43.252973 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.253632 kubelet[2240]: W0813 00:57:43.252997 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.253632 kubelet[2240]: E0813 00:57:43.253250 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.255377 kubelet[2240]: E0813 00:57:43.254174 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.255377 kubelet[2240]: W0813 00:57:43.254230 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.255377 kubelet[2240]: E0813 00:57:43.254304 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.255736 kubelet[2240]: E0813 00:57:43.255709 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.255808 kubelet[2240]: W0813 00:57:43.255740 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.258153 kubelet[2240]: E0813 00:57:43.255885 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.258153 kubelet[2240]: I0813 00:57:43.255963 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jdq6\" (UniqueName: \"kubernetes.io/projected/281890f3-f0b5-4757-b7db-b03ab8faf735-kube-api-access-7jdq6\") pod \"csi-node-driver-hv7wr\" (UID: \"281890f3-f0b5-4757-b7db-b03ab8faf735\") " pod="calico-system/csi-node-driver-hv7wr" Aug 13 00:57:43.258153 kubelet[2240]: E0813 00:57:43.256843 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.258153 kubelet[2240]: W0813 00:57:43.257069 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.258794 kubelet[2240]: E0813 00:57:43.258721 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.260500 kubelet[2240]: E0813 00:57:43.260428 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.260655 kubelet[2240]: W0813 00:57:43.260500 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.260655 kubelet[2240]: E0813 00:57:43.260566 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.262603 kubelet[2240]: E0813 00:57:43.262546 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.262711 kubelet[2240]: W0813 00:57:43.262616 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.262711 kubelet[2240]: E0813 00:57:43.262643 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.263881 kubelet[2240]: E0813 00:57:43.263816 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.263881 kubelet[2240]: W0813 00:57:43.263871 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.264057 kubelet[2240]: E0813 00:57:43.263901 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.264505 kubelet[2240]: E0813 00:57:43.264478 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.264652 kubelet[2240]: W0813 00:57:43.264527 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.264652 kubelet[2240]: E0813 00:57:43.264552 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.267301 kubelet[2240]: E0813 00:57:43.267100 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.267301 kubelet[2240]: W0813 00:57:43.267126 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.267301 kubelet[2240]: E0813 00:57:43.267153 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.269583 kubelet[2240]: E0813 00:57:43.269537 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.269737 kubelet[2240]: W0813 00:57:43.269615 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.269737 kubelet[2240]: E0813 00:57:43.269644 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.364084 kubelet[2240]: E0813 00:57:43.363971 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.364084 kubelet[2240]: W0813 00:57:43.364004 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.364084 kubelet[2240]: E0813 00:57:43.364034 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.368037 kubelet[2240]: E0813 00:57:43.364429 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.368037 kubelet[2240]: W0813 00:57:43.364446 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.368037 kubelet[2240]: E0813 00:57:43.364470 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.368037 kubelet[2240]: E0813 00:57:43.364838 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.368037 kubelet[2240]: W0813 00:57:43.364852 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.368037 kubelet[2240]: E0813 00:57:43.364872 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.368037 kubelet[2240]: E0813 00:57:43.365200 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.368037 kubelet[2240]: W0813 00:57:43.365211 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.368037 kubelet[2240]: E0813 00:57:43.365231 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.368037 kubelet[2240]: E0813 00:57:43.365563 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.368708 kubelet[2240]: W0813 00:57:43.365576 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.368708 kubelet[2240]: E0813 00:57:43.365707 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.368708 kubelet[2240]: E0813 00:57:43.365980 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.368708 kubelet[2240]: W0813 00:57:43.365993 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.368708 kubelet[2240]: E0813 00:57:43.366104 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.368708 kubelet[2240]: E0813 00:57:43.366281 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.368708 kubelet[2240]: W0813 00:57:43.366292 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.368708 kubelet[2240]: E0813 00:57:43.366396 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.368708 kubelet[2240]: E0813 00:57:43.366582 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.368708 kubelet[2240]: W0813 00:57:43.366619 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.369285 kubelet[2240]: E0813 00:57:43.366773 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.369285 kubelet[2240]: E0813 00:57:43.367570 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.369285 kubelet[2240]: W0813 00:57:43.367609 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.369285 kubelet[2240]: E0813 00:57:43.367737 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.369285 kubelet[2240]: E0813 00:57:43.367934 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.369285 kubelet[2240]: W0813 00:57:43.367947 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.369285 kubelet[2240]: E0813 00:57:43.368061 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.369285 kubelet[2240]: E0813 00:57:43.368257 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.369285 kubelet[2240]: W0813 00:57:43.368270 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.369285 kubelet[2240]: E0813 00:57:43.368382 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.369840 kubelet[2240]: E0813 00:57:43.368578 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.369840 kubelet[2240]: W0813 00:57:43.368606 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.369840 kubelet[2240]: E0813 00:57:43.368720 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.369840 kubelet[2240]: E0813 00:57:43.368926 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.369840 kubelet[2240]: W0813 00:57:43.368940 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.369840 kubelet[2240]: E0813 00:57:43.369053 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.369840 kubelet[2240]: E0813 00:57:43.369232 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.369840 kubelet[2240]: W0813 00:57:43.369244 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.369840 kubelet[2240]: E0813 00:57:43.369357 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.369840 kubelet[2240]: E0813 00:57:43.369581 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.370448 kubelet[2240]: W0813 00:57:43.369608 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.370448 kubelet[2240]: E0813 00:57:43.369773 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.370448 kubelet[2240]: E0813 00:57:43.369983 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.370448 kubelet[2240]: W0813 00:57:43.369997 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.370448 kubelet[2240]: E0813 00:57:43.370136 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.372685 kubelet[2240]: E0813 00:57:43.372070 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.372685 kubelet[2240]: W0813 00:57:43.372206 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.372685 kubelet[2240]: E0813 00:57:43.372296 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.372966 kubelet[2240]: E0813 00:57:43.372838 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.372966 kubelet[2240]: W0813 00:57:43.372854 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.373076 kubelet[2240]: E0813 00:57:43.372983 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.374635 kubelet[2240]: E0813 00:57:43.373430 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.374635 kubelet[2240]: W0813 00:57:43.373447 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.374635 kubelet[2240]: E0813 00:57:43.373506 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.374635 kubelet[2240]: E0813 00:57:43.374212 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.374635 kubelet[2240]: W0813 00:57:43.374227 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.374635 kubelet[2240]: E0813 00:57:43.374337 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.375101 kubelet[2240]: E0813 00:57:43.374708 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.375101 kubelet[2240]: W0813 00:57:43.374725 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.375101 kubelet[2240]: E0813 00:57:43.374883 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.375284 kubelet[2240]: E0813 00:57:43.375137 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.375284 kubelet[2240]: W0813 00:57:43.375151 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.375406 kubelet[2240]: E0813 00:57:43.375299 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.380182 kubelet[2240]: E0813 00:57:43.375505 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.380182 kubelet[2240]: W0813 00:57:43.375530 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.380182 kubelet[2240]: E0813 00:57:43.375677 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.380182 kubelet[2240]: E0813 00:57:43.377387 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.380182 kubelet[2240]: W0813 00:57:43.377405 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.380182 kubelet[2240]: E0813 00:57:43.377581 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.380182 kubelet[2240]: E0813 00:57:43.377838 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.380182 kubelet[2240]: W0813 00:57:43.377852 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.380182 kubelet[2240]: E0813 00:57:43.377871 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:43.420096 kubelet[2240]: E0813 00:57:43.420056 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:43.420096 kubelet[2240]: W0813 00:57:43.420089 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:43.420343 kubelet[2240]: E0813 00:57:43.420124 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:43.441176 env[1335]: time="2025-08-13T00:57:43.441119858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67d846c7d9-nxqmc,Uid:1ca9cf10-65b1-4533-aae6-0bece8dc9feb,Namespace:calico-system,Attempt:0,} returns sandbox id \"be678abdfcc6ac1d9719529a43e70e75c121b4467a66f6d01c477280045d8ca9\"" Aug 13 00:57:43.444102 env[1335]: time="2025-08-13T00:57:43.444055688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:57:43.470838 env[1335]: time="2025-08-13T00:57:43.470780083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hgvg8,Uid:47711698-865c-441b-b450-7281daa79808,Namespace:calico-system,Attempt:0,} returns sandbox id \"80e492e725df1df56613eecb8f6b948029509bee48ae76b9bc100d7b75195e61\"" Aug 13 00:57:43.475000 audit[2803]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2803 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:43.475000 audit[2803]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff84b46ec0 a2=0 a3=7fff84b46eac items=0 ppid=2359 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:43.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:43.481000 audit[2803]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2803 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:43.481000 audit[2803]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff84b46ec0 a2=0 a3=0 items=0 ppid=2359 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:43.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:44.533764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3293879889.mount: Deactivated successfully. 
Aug 13 00:57:45.119674 kubelet[2240]: E0813 00:57:45.119614 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hv7wr" podUID="281890f3-f0b5-4757-b7db-b03ab8faf735" Aug 13 00:57:45.822789 env[1335]: time="2025-08-13T00:57:45.822716162Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:45.826627 env[1335]: time="2025-08-13T00:57:45.826542309Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:45.829257 env[1335]: time="2025-08-13T00:57:45.829207146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:45.831835 env[1335]: time="2025-08-13T00:57:45.831775948Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:45.832625 env[1335]: time="2025-08-13T00:57:45.832546369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 00:57:45.837220 env[1335]: time="2025-08-13T00:57:45.835280765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:57:45.849795 env[1335]: time="2025-08-13T00:57:45.849738137Z" level=info msg="CreateContainer within sandbox \"be678abdfcc6ac1d9719529a43e70e75c121b4467a66f6d01c477280045d8ca9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:57:45.874316 env[1335]: time="2025-08-13T00:57:45.874204726Z" level=info msg="CreateContainer within sandbox \"be678abdfcc6ac1d9719529a43e70e75c121b4467a66f6d01c477280045d8ca9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8263c48d133dc3234802abf0966e8270a966a607b43b8995c4e141aa753ebe04\"" Aug 13 00:57:45.876410 env[1335]: time="2025-08-13T00:57:45.875791947Z" level=info msg="StartContainer for \"8263c48d133dc3234802abf0966e8270a966a607b43b8995c4e141aa753ebe04\"" Aug 13 00:57:45.997392 env[1335]: time="2025-08-13T00:57:45.997316765Z" level=info msg="StartContainer for \"8263c48d133dc3234802abf0966e8270a966a607b43b8995c4e141aa753ebe04\" returns successfully" Aug 13 00:57:46.289000 kubelet[2240]: E0813 00:57:46.288949 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.289000 kubelet[2240]: W0813 00:57:46.289006 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.289831 kubelet[2240]: E0813 00:57:46.289041 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:46.289831 kubelet[2240]: E0813 00:57:46.289549 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.289831 kubelet[2240]: W0813 00:57:46.289580 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.289831 kubelet[2240]: E0813 00:57:46.289629 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.290070 kubelet[2240]: E0813 00:57:46.290004 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.290070 kubelet[2240]: W0813 00:57:46.290018 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.290070 kubelet[2240]: E0813 00:57:46.290041 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.290415 kubelet[2240]: E0813 00:57:46.290385 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.290518 kubelet[2240]: W0813 00:57:46.290419 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.290518 kubelet[2240]: E0813 00:57:46.290440 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.290847 kubelet[2240]: E0813 00:57:46.290819 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.290847 kubelet[2240]: W0813 00:57:46.290849 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.290999 kubelet[2240]: E0813 00:57:46.290869 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.291255 kubelet[2240]: E0813 00:57:46.291231 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.291357 kubelet[2240]: W0813 00:57:46.291253 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.291357 kubelet[2240]: E0813 00:57:46.291283 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:46.291629 kubelet[2240]: E0813 00:57:46.291605 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.291721 kubelet[2240]: W0813 00:57:46.291632 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.291721 kubelet[2240]: E0813 00:57:46.291650 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.291985 kubelet[2240]: E0813 00:57:46.291961 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.291985 kubelet[2240]: W0813 00:57:46.291983 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.292133 kubelet[2240]: E0813 00:57:46.292003 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.292503 kubelet[2240]: E0813 00:57:46.292464 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.292503 kubelet[2240]: W0813 00:57:46.292495 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.292767 kubelet[2240]: E0813 00:57:46.292518 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.292916 kubelet[2240]: E0813 00:57:46.292893 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.293067 kubelet[2240]: W0813 00:57:46.292916 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.293067 kubelet[2240]: E0813 00:57:46.292935 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.293322 kubelet[2240]: E0813 00:57:46.293298 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.293419 kubelet[2240]: W0813 00:57:46.293322 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.293419 kubelet[2240]: E0813 00:57:46.293341 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:46.293680 kubelet[2240]: E0813 00:57:46.293654 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.293680 kubelet[2240]: W0813 00:57:46.293670 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.293833 kubelet[2240]: E0813 00:57:46.293689 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.294507 kubelet[2240]: E0813 00:57:46.294071 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.294507 kubelet[2240]: W0813 00:57:46.294089 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.294507 kubelet[2240]: E0813 00:57:46.294115 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.294507 kubelet[2240]: E0813 00:57:46.294406 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.294507 kubelet[2240]: W0813 00:57:46.294419 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.294507 kubelet[2240]: E0813 00:57:46.294434 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.294911 kubelet[2240]: E0813 00:57:46.294773 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.294911 kubelet[2240]: W0813 00:57:46.294788 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.294911 kubelet[2240]: E0813 00:57:46.294805 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.295618 kubelet[2240]: E0813 00:57:46.295182 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.295618 kubelet[2240]: W0813 00:57:46.295199 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.295618 kubelet[2240]: E0813 00:57:46.295214 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:46.295618 kubelet[2240]: E0813 00:57:46.295552 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.295618 kubelet[2240]: W0813 00:57:46.295566 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.295618 kubelet[2240]: E0813 00:57:46.295605 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.295982 kubelet[2240]: E0813 00:57:46.295941 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.295982 kubelet[2240]: W0813 00:57:46.295959 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.295982 kubelet[2240]: E0813 00:57:46.295979 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.300813 kubelet[2240]: E0813 00:57:46.300779 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.300813 kubelet[2240]: W0813 00:57:46.300810 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.301027 kubelet[2240]: E0813 00:57:46.300843 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.301845 kubelet[2240]: E0813 00:57:46.301813 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.301845 kubelet[2240]: W0813 00:57:46.301843 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.302186 kubelet[2240]: E0813 00:57:46.302058 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.302270 kubelet[2240]: E0813 00:57:46.302239 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.302270 kubelet[2240]: W0813 00:57:46.302256 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.302451 kubelet[2240]: E0813 00:57:46.302424 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:46.311417 kubelet[2240]: E0813 00:57:46.302812 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.311417 kubelet[2240]: W0813 00:57:46.302827 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.311417 kubelet[2240]: E0813 00:57:46.302967 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.311417 kubelet[2240]: E0813 00:57:46.303189 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.311417 kubelet[2240]: W0813 00:57:46.303202 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.311417 kubelet[2240]: E0813 00:57:46.303227 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.311417 kubelet[2240]: E0813 00:57:46.303743 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.311417 kubelet[2240]: W0813 00:57:46.303758 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.311417 kubelet[2240]: E0813 00:57:46.303876 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.311417 kubelet[2240]: E0813 00:57:46.304690 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.312056 kubelet[2240]: W0813 00:57:46.304708 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.312056 kubelet[2240]: E0813 00:57:46.304836 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.312056 kubelet[2240]: E0813 00:57:46.305039 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.312056 kubelet[2240]: W0813 00:57:46.305054 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.312056 kubelet[2240]: E0813 00:57:46.305183 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:46.312056 kubelet[2240]: E0813 00:57:46.306508 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.312056 kubelet[2240]: W0813 00:57:46.306524 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.312056 kubelet[2240]: E0813 00:57:46.306547 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.312056 kubelet[2240]: E0813 00:57:46.306863 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.312056 kubelet[2240]: W0813 00:57:46.306876 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.312553 kubelet[2240]: E0813 00:57:46.306898 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.312553 kubelet[2240]: E0813 00:57:46.307269 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.312553 kubelet[2240]: W0813 00:57:46.307284 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.312553 kubelet[2240]: E0813 00:57:46.307405 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.312553 kubelet[2240]: E0813 00:57:46.309803 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.312553 kubelet[2240]: W0813 00:57:46.309821 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.312553 kubelet[2240]: E0813 00:57:46.309848 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.312553 kubelet[2240]: E0813 00:57:46.310234 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.312553 kubelet[2240]: W0813 00:57:46.310257 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.312553 kubelet[2240]: E0813 00:57:46.310283 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:57:46.313119 kubelet[2240]: E0813 00:57:46.310960 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.313119 kubelet[2240]: W0813 00:57:46.310974 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.313119 kubelet[2240]: E0813 00:57:46.311088 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.313119 kubelet[2240]: E0813 00:57:46.311295 2240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:57:46.313119 kubelet[2240]: W0813 00:57:46.311307 2240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:57:46.313119 kubelet[2240]: E0813 00:57:46.311321 2240 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:57:46.818709 env[1335]: time="2025-08-13T00:57:46.818626564Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:46.821183 env[1335]: time="2025-08-13T00:57:46.821118872Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:46.823743 env[1335]: time="2025-08-13T00:57:46.823695244Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:46.827503 env[1335]: time="2025-08-13T00:57:46.827450634Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:46.828290 env[1335]: time="2025-08-13T00:57:46.828235541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 00:57:46.833436 env[1335]: time="2025-08-13T00:57:46.833384204Z" level=info msg="CreateContainer within sandbox \"80e492e725df1df56613eecb8f6b948029509bee48ae76b9bc100d7b75195e61\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:57:46.860371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530398835.mount: Deactivated successfully. 
Aug 13 00:57:46.865807 env[1335]: time="2025-08-13T00:57:46.865712696Z" level=info msg="CreateContainer within sandbox \"80e492e725df1df56613eecb8f6b948029509bee48ae76b9bc100d7b75195e61\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d02774207dd53720dac9e8768c51a9368c996418a2e0b92afebcb1d404e9501a\"" Aug 13 00:57:46.869552 env[1335]: time="2025-08-13T00:57:46.869490773Z" level=info msg="StartContainer for \"d02774207dd53720dac9e8768c51a9368c996418a2e0b92afebcb1d404e9501a\"" Aug 13 00:57:46.989249 env[1335]: time="2025-08-13T00:57:46.989168089Z" level=info msg="StartContainer for \"d02774207dd53720dac9e8768c51a9368c996418a2e0b92afebcb1d404e9501a\" returns successfully" Aug 13 00:57:47.063865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d02774207dd53720dac9e8768c51a9368c996418a2e0b92afebcb1d404e9501a-rootfs.mount: Deactivated successfully. Aug 13 00:57:47.119658 kubelet[2240]: E0813 00:57:47.119518 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hv7wr" podUID="281890f3-f0b5-4757-b7db-b03ab8faf735" Aug 13 00:57:47.276775 kubelet[2240]: I0813 00:57:47.276720 2240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:57:47.304315 kubelet[2240]: I0813 00:57:47.304205 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67d846c7d9-nxqmc" podStartSLOduration=2.91290814 podStartE2EDuration="5.304168323s" podCreationTimestamp="2025-08-13 00:57:42 +0000 UTC" firstStartedPulling="2025-08-13 00:57:43.443365342 +0000 UTC m=+23.581871608" lastFinishedPulling="2025-08-13 00:57:45.834625507 +0000 UTC m=+25.973131791" observedRunningTime="2025-08-13 00:57:46.306278273 +0000 UTC m=+26.444784563" watchObservedRunningTime="2025-08-13 00:57:47.304168323 +0000 UTC m=+27.442674657" Aug 13 00:57:47.772711 env[1335]: time="2025-08-13T00:57:47.772562115Z" level=info msg="shim disconnected" id=d02774207dd53720dac9e8768c51a9368c996418a2e0b92afebcb1d404e9501a Aug 13 00:57:47.772711 env[1335]: time="2025-08-13T00:57:47.772683118Z" level=warning msg="cleaning up after shim disconnected" id=d02774207dd53720dac9e8768c51a9368c996418a2e0b92afebcb1d404e9501a namespace=k8s.io Aug 13 00:57:47.772711 env[1335]: time="2025-08-13T00:57:47.772722623Z" level=info msg="cleaning up dead shim" Aug 13 00:57:47.787885 env[1335]: time="2025-08-13T00:57:47.787792170Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2938 runtime=io.containerd.runc.v2\n" Aug 13 00:57:48.285663 env[1335]: time="2025-08-13T00:57:48.284544870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:57:49.120193 kubelet[2240]: E0813 00:57:49.120047 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hv7wr" podUID="281890f3-f0b5-4757-b7db-b03ab8faf735" Aug 13 00:57:51.121913 kubelet[2240]: E0813 00:57:51.121828 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hv7wr" podUID="281890f3-f0b5-4757-b7db-b03ab8faf735" Aug 13 00:57:51.773348 env[1335]: time="2025-08-13T00:57:51.773253179Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:51.776454 env[1335]: time="2025-08-13T00:57:51.776400378Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:51.779076 env[1335]: time="2025-08-13T00:57:51.779028374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:51.781435 env[1335]: time="2025-08-13T00:57:51.781390057Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:57:51.782254 env[1335]: time="2025-08-13T00:57:51.782202852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 00:57:51.787564 env[1335]: time="2025-08-13T00:57:51.787506396Z" level=info msg="CreateContainer within sandbox \"80e492e725df1df56613eecb8f6b948029509bee48ae76b9bc100d7b75195e61\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:57:51.816275 env[1335]: time="2025-08-13T00:57:51.816203234Z" level=info msg="CreateContainer within sandbox \"80e492e725df1df56613eecb8f6b948029509bee48ae76b9bc100d7b75195e61\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"387d2ba3a15305b3e41b3317fc497d1118096fc9b7ef4900653070dc8accb2ed\"" Aug 13 00:57:51.818746 env[1335]: time="2025-08-13T00:57:51.817399655Z" level=info msg="StartContainer for \"387d2ba3a15305b3e41b3317fc497d1118096fc9b7ef4900653070dc8accb2ed\"" Aug 13 00:57:51.872852 systemd[1]: run-containerd-runc-k8s.io-387d2ba3a15305b3e41b3317fc497d1118096fc9b7ef4900653070dc8accb2ed-runc.jyYfdN.mount: Deactivated successfully. Aug 13 00:57:51.938684 env[1335]: time="2025-08-13T00:57:51.938563125Z" level=info msg="StartContainer for \"387d2ba3a15305b3e41b3317fc497d1118096fc9b7ef4900653070dc8accb2ed\" returns successfully" Aug 13 00:57:53.037075 env[1335]: time="2025-08-13T00:57:53.036532155Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:57:53.076111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-387d2ba3a15305b3e41b3317fc497d1118096fc9b7ef4900653070dc8accb2ed-rootfs.mount: Deactivated successfully. 
Aug 13 00:57:53.098354 kubelet[2240]: I0813 00:57:53.096139 2240 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:57:53.140645 env[1335]: time="2025-08-13T00:57:53.135945635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hv7wr,Uid:281890f3-f0b5-4757-b7db-b03ab8faf735,Namespace:calico-system,Attempt:0,}" Aug 13 00:57:53.265554 kubelet[2240]: I0813 00:57:53.265477 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpwqj\" (UniqueName: \"kubernetes.io/projected/42c2f2da-5a83-4b40-aec1-478c8ca60301-kube-api-access-fpwqj\") pod \"coredns-7c65d6cfc9-f5mjv\" (UID: \"42c2f2da-5a83-4b40-aec1-478c8ca60301\") " pod="kube-system/coredns-7c65d6cfc9-f5mjv" Aug 13 00:57:53.266690 kubelet[2240]: I0813 00:57:53.266641 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8wrr\" (UniqueName: \"kubernetes.io/projected/2bb39f28-f779-40ce-a3ee-b95479595e66-kube-api-access-x8wrr\") pod \"calico-apiserver-5d7cc8c448-5tbcd\" (UID: \"2bb39f28-f779-40ce-a3ee-b95479595e66\") " pod="calico-apiserver/calico-apiserver-5d7cc8c448-5tbcd" Aug 13 00:57:53.266982 kubelet[2240]: I0813 00:57:53.266953 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xvgm\" (UniqueName: \"kubernetes.io/projected/2b8de9e1-b570-4daf-8356-82b085f5759f-kube-api-access-5xvgm\") pod \"coredns-7c65d6cfc9-mbnvb\" (UID: \"2b8de9e1-b570-4daf-8356-82b085f5759f\") " pod="kube-system/coredns-7c65d6cfc9-mbnvb" Aug 13 00:57:53.267172 kubelet[2240]: I0813 00:57:53.267145 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nmss\" (UniqueName: \"kubernetes.io/projected/3a1beee6-8c8b-44a5-9d7c-8a8072355c14-kube-api-access-2nmss\") pod \"calico-apiserver-5d7cc8c448-zvxvr\" (UID: \"3a1beee6-8c8b-44a5-9d7c-8a8072355c14\") " pod="calico-apiserver/calico-apiserver-5d7cc8c448-zvxvr" Aug 13 00:57:53.267327 kubelet[2240]: I0813 00:57:53.267305 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b8de9e1-b570-4daf-8356-82b085f5759f-config-volume\") pod \"coredns-7c65d6cfc9-mbnvb\" (UID: \"2b8de9e1-b570-4daf-8356-82b085f5759f\") " pod="kube-system/coredns-7c65d6cfc9-mbnvb" Aug 13 00:57:53.267481 kubelet[2240]: I0813 00:57:53.267449 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e00a54c4-ec09-455a-86b9-9b9e86402f95-tigera-ca-bundle\") pod \"calico-kube-controllers-c744b486-wkm2q\" (UID: \"e00a54c4-ec09-455a-86b9-9b9e86402f95\") " pod="calico-system/calico-kube-controllers-c744b486-wkm2q" Aug 13 00:57:53.267637 kubelet[2240]: I0813 00:57:53.267617 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2bb39f28-f779-40ce-a3ee-b95479595e66-calico-apiserver-certs\") pod \"calico-apiserver-5d7cc8c448-5tbcd\" (UID: \"2bb39f28-f779-40ce-a3ee-b95479595e66\") " pod="calico-apiserver/calico-apiserver-5d7cc8c448-5tbcd" Aug 13 00:57:53.267787 kubelet[2240]: I0813 00:57:53.267766 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/3a1beee6-8c8b-44a5-9d7c-8a8072355c14-calico-apiserver-certs\") pod \"calico-apiserver-5d7cc8c448-zvxvr\" (UID: \"3a1beee6-8c8b-44a5-9d7c-8a8072355c14\") " pod="calico-apiserver/calico-apiserver-5d7cc8c448-zvxvr" Aug 13 00:57:53.267977 kubelet[2240]: I0813 00:57:53.267950 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c2f2da-5a83-4b40-aec1-478c8ca60301-config-volume\") pod \"coredns-7c65d6cfc9-f5mjv\" (UID: \"42c2f2da-5a83-4b40-aec1-478c8ca60301\") " pod="kube-system/coredns-7c65d6cfc9-f5mjv" Aug 13 00:57:53.268150 kubelet[2240]: I0813 00:57:53.268126 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hbvd\" (UniqueName: \"kubernetes.io/projected/e00a54c4-ec09-455a-86b9-9b9e86402f95-kube-api-access-6hbvd\") pod \"calico-kube-controllers-c744b486-wkm2q\" (UID: \"e00a54c4-ec09-455a-86b9-9b9e86402f95\") " pod="calico-system/calico-kube-controllers-c744b486-wkm2q" Aug 13 00:57:53.375936 kubelet[2240]: I0813 00:57:53.375715 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r5bt\" (UniqueName: \"kubernetes.io/projected/8da3e8d7-b831-4716-b890-6a89d4b7984d-kube-api-access-6r5bt\") pod \"goldmane-58fd7646b9-cpcml\" (UID: \"8da3e8d7-b831-4716-b890-6a89d4b7984d\") " pod="calico-system/goldmane-58fd7646b9-cpcml" Aug 13 00:57:53.387274 kubelet[2240]: I0813 00:57:53.387208 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4be5576-b47c-4b87-a843-c6ed19671979-whisker-backend-key-pair\") pod \"whisker-5654d7f6ff-28dlk\" (UID: \"e4be5576-b47c-4b87-a843-c6ed19671979\") " pod="calico-system/whisker-5654d7f6ff-28dlk" Aug 13 00:57:53.387553 kubelet[2240]: I0813 00:57:53.387343 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8da3e8d7-b831-4716-b890-6a89d4b7984d-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-cpcml\" (UID: \"8da3e8d7-b831-4716-b890-6a89d4b7984d\") " pod="calico-system/goldmane-58fd7646b9-cpcml" Aug 13 00:57:53.387553 kubelet[2240]: I0813 00:57:53.387472 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8da3e8d7-b831-4716-b890-6a89d4b7984d-config\") pod \"goldmane-58fd7646b9-cpcml\" (UID: \"8da3e8d7-b831-4716-b890-6a89d4b7984d\") " pod="calico-system/goldmane-58fd7646b9-cpcml" Aug 13 00:57:53.387553 kubelet[2240]: I0813 00:57:53.387503 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjmt\" (UniqueName: \"kubernetes.io/projected/e4be5576-b47c-4b87-a843-c6ed19671979-kube-api-access-cnjmt\") pod \"whisker-5654d7f6ff-28dlk\" (UID: \"e4be5576-b47c-4b87-a843-c6ed19671979\") " pod="calico-system/whisker-5654d7f6ff-28dlk" Aug 13 00:57:53.387553 kubelet[2240]: I0813 00:57:53.387535 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8da3e8d7-b831-4716-b890-6a89d4b7984d-goldmane-key-pair\") pod \"goldmane-58fd7646b9-cpcml\" (UID: \"8da3e8d7-b831-4716-b890-6a89d4b7984d\") " pod="calico-system/goldmane-58fd7646b9-cpcml" Aug 13 00:57:53.387816 
kubelet[2240]: I0813 00:57:53.387562 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4be5576-b47c-4b87-a843-c6ed19671979-whisker-ca-bundle\") pod \"whisker-5654d7f6ff-28dlk\" (UID: \"e4be5576-b47c-4b87-a843-c6ed19671979\") " pod="calico-system/whisker-5654d7f6ff-28dlk" Aug 13 00:57:53.500843 env[1335]: time="2025-08-13T00:57:53.500763553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7cc8c448-5tbcd,Uid:2bb39f28-f779-40ce-a3ee-b95479595e66,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:57:53.643508 env[1335]: time="2025-08-13T00:57:53.643305093Z" level=info msg="shim disconnected" id=387d2ba3a15305b3e41b3317fc497d1118096fc9b7ef4900653070dc8accb2ed Aug 13 00:57:53.643508 env[1335]: time="2025-08-13T00:57:53.643385674Z" level=warning msg="cleaning up after shim disconnected" id=387d2ba3a15305b3e41b3317fc497d1118096fc9b7ef4900653070dc8accb2ed namespace=k8s.io Aug 13 00:57:53.643508 env[1335]: time="2025-08-13T00:57:53.643402255Z" level=info msg="cleaning up dead shim" Aug 13 00:57:53.671039 env[1335]: time="2025-08-13T00:57:53.670956591Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3013 runtime=io.containerd.runc.v2\n" Aug 13 00:57:53.773045 env[1335]: time="2025-08-13T00:57:53.772959860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f5mjv,Uid:42c2f2da-5a83-4b40-aec1-478c8ca60301,Namespace:kube-system,Attempt:0,}" Aug 13 00:57:53.787206 env[1335]: time="2025-08-13T00:57:53.787089088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mbnvb,Uid:2b8de9e1-b570-4daf-8356-82b085f5759f,Namespace:kube-system,Attempt:0,}" Aug 13 00:57:53.799373 env[1335]: time="2025-08-13T00:57:53.799296731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7cc8c448-zvxvr,Uid:3a1beee6-8c8b-44a5-9d7c-8a8072355c14,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:57:53.810869 env[1335]: time="2025-08-13T00:57:53.810798095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c744b486-wkm2q,Uid:e00a54c4-ec09-455a-86b9-9b9e86402f95,Namespace:calico-system,Attempt:0,}" Aug 13 00:57:53.811621 env[1335]: time="2025-08-13T00:57:53.810793706Z" level=error msg="Failed to destroy network for sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:53.812466 env[1335]: time="2025-08-13T00:57:53.812401913Z" level=error msg="encountered an error cleaning up failed sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:53.812863 env[1335]: time="2025-08-13T00:57:53.812799227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hv7wr,Uid:281890f3-f0b5-4757-b7db-b03ab8faf735,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:53.815914 kubelet[2240]: E0813 00:57:53.813372 2240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:53.815914 kubelet[2240]: E0813 00:57:53.813498 2240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hv7wr" Aug 13 00:57:53.815914 kubelet[2240]: E0813 00:57:53.813539 2240 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hv7wr" Aug 13 00:57:53.816671 kubelet[2240]: E0813 00:57:53.813659 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hv7wr_calico-system(281890f3-f0b5-4757-b7db-b03ab8faf735)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hv7wr_calico-system(281890f3-f0b5-4757-b7db-b03ab8faf735)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hv7wr" podUID="281890f3-f0b5-4757-b7db-b03ab8faf735" Aug 13 00:57:53.823608 env[1335]: time="2025-08-13T00:57:53.823533056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cpcml,Uid:8da3e8d7-b831-4716-b890-6a89d4b7984d,Namespace:calico-system,Attempt:0,}" Aug 13 00:57:53.824252 env[1335]: time="2025-08-13T00:57:53.823545089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5654d7f6ff-28dlk,Uid:e4be5576-b47c-4b87-a843-c6ed19671979,Namespace:calico-system,Attempt:0,}" Aug 13 00:57:53.972800 env[1335]: time="2025-08-13T00:57:53.971383257Z" level=error msg="Failed to destroy network for sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:53.980780 env[1335]: time="2025-08-13T00:57:53.980650578Z" level=error msg="encountered an error cleaning up failed sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:53.981046 env[1335]: time="2025-08-13T00:57:53.980833312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7cc8c448-5tbcd,Uid:2bb39f28-f779-40ce-a3ee-b95479595e66,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:53.984907 kubelet[2240]: E0813 00:57:53.981500 2240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:53.984907 kubelet[2240]: E0813 00:57:53.981727 2240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7cc8c448-5tbcd" Aug 13 00:57:53.984907 kubelet[2240]: E0813 00:57:53.981787 2240 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7cc8c448-5tbcd" Aug 13 00:57:53.985287 kubelet[2240]: E0813 00:57:53.981890 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7cc8c448-5tbcd_calico-apiserver(2bb39f28-f779-40ce-a3ee-b95479595e66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7cc8c448-5tbcd_calico-apiserver(2bb39f28-f779-40ce-a3ee-b95479595e66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7cc8c448-5tbcd" podUID="2bb39f28-f779-40ce-a3ee-b95479595e66" Aug 13 00:57:54.287064 env[1335]: time="2025-08-13T00:57:54.286046468Z" level=error msg="Failed to destroy network for sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.292929 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9-shm.mount: Deactivated successfully. 
Aug 13 00:57:54.296972 env[1335]: time="2025-08-13T00:57:54.296904365Z" level=error msg="encountered an error cleaning up failed sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.297229 env[1335]: time="2025-08-13T00:57:54.297171967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f5mjv,Uid:42c2f2da-5a83-4b40-aec1-478c8ca60301,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.298431 kubelet[2240]: E0813 00:57:54.297714 2240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.298431 kubelet[2240]: E0813 00:57:54.297833 2240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f5mjv" Aug 13 00:57:54.298431 kubelet[2240]: E0813 00:57:54.297866 2240 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f5mjv" Aug 13 00:57:54.299102 kubelet[2240]: E0813 00:57:54.297963 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-f5mjv_kube-system(42c2f2da-5a83-4b40-aec1-478c8ca60301)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-f5mjv_kube-system(42c2f2da-5a83-4b40-aec1-478c8ca60301)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f5mjv" podUID="42c2f2da-5a83-4b40-aec1-478c8ca60301" Aug 13 00:57:54.311277 kubelet[2240]: I0813 00:57:54.309842 2240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:57:54.367995 env[1335]: time="2025-08-13T00:57:54.365158153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:57:54.382634 kubelet[2240]: I0813 00:57:54.382476 2240 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:57:54.387960 env[1335]: time="2025-08-13T00:57:54.387893320Z" level=info msg="StopPodSandbox for \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\"" Aug 13 00:57:54.394314 kubelet[2240]: I0813 00:57:54.392960 2240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:57:54.400850 env[1335]: time="2025-08-13T00:57:54.399698547Z" level=info msg="StopPodSandbox for \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\"" Aug 13 00:57:54.427010 kernel: kauditd_printk_skb: 8 callbacks suppressed Aug 13 00:57:54.427221 kernel: audit: type=1325 audit(1755046674.403:283): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:54.403000 audit[3241]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:54.427465 env[1335]: time="2025-08-13T00:57:54.407722737Z" level=info msg="StopPodSandbox for \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\"" Aug 13 00:57:54.427568 kubelet[2240]: I0813 00:57:54.403408 2240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:57:54.403000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc894b7f10 a2=0 a3=7ffc894b7efc items=0 ppid=2359 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:54.470819 kernel: audit: type=1300 audit(1755046674.403:283): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc894b7f10 a2=0 a3=7ffc894b7efc items=0 ppid=2359 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:54.481891 env[1335]: time="2025-08-13T00:57:54.481813497Z" level=error msg="Failed to destroy network for sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.491404 env[1335]: time="2025-08-13T00:57:54.491315770Z" level=error msg="encountered an error cleaning up failed sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.491717 env[1335]: time="2025-08-13T00:57:54.491653205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7cc8c448-zvxvr,Uid:3a1beee6-8c8b-44a5-9d7c-8a8072355c14,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:54.508533 kubelet[2240]: E0813 00:57:54.492161 2240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.508533 kubelet[2240]: E0813 00:57:54.492230 2240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7cc8c448-zvxvr" Aug 13 00:57:54.508533 kubelet[2240]: E0813 00:57:54.492302 2240 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7cc8c448-zvxvr" Aug 13 00:57:54.508947 kernel: audit: type=1327 audit(1755046674.403:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:54.492235 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7-shm.mount: Deactivated successfully. 
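The PROCTITLE field in the audit records above is the audited process's argv, hex-encoded with NUL bytes separating the arguments. Decoding the literal value from the record (plain Python, no audit tooling assumed) recovers the command behind the NETFILTER_CFG events:

```python
# Decode the PROCTITLE value from the audit record above: hex-encoded argv
# with NUL separators between arguments.
proctitle_hex = (
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)
argv = [part.decode() for part in bytes.fromhex(proctitle_hex).split(b"\x00") if part]
print(" ".join(argv))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```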
Aug 13 00:57:54.509156 kubelet[2240]: E0813 00:57:54.492366 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7cc8c448-zvxvr_calico-apiserver(3a1beee6-8c8b-44a5-9d7c-8a8072355c14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7cc8c448-zvxvr_calico-apiserver(3a1beee6-8c8b-44a5-9d7c-8a8072355c14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7cc8c448-zvxvr" podUID="3a1beee6-8c8b-44a5-9d7c-8a8072355c14" Aug 13 00:57:54.511683 env[1335]: time="2025-08-13T00:57:54.511558920Z" level=error msg="Failed to destroy network for sandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.512401 env[1335]: time="2025-08-13T00:57:54.512340146Z" level=error msg="encountered an error cleaning up failed sandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.512664 env[1335]: time="2025-08-13T00:57:54.512605409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mbnvb,Uid:2b8de9e1-b570-4daf-8356-82b085f5759f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.456000 audit[3241]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3241 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:54.531187 kubelet[2240]: E0813 00:57:54.527857 2240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.531187 kubelet[2240]: E0813 00:57:54.527947 2240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mbnvb" Aug 13 00:57:54.531187 kubelet[2240]: E0813 00:57:54.528024 2240 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mbnvb" Aug 13 00:57:54.522793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8-shm.mount: Deactivated successfully. Aug 13 00:57:54.531543 kubelet[2240]: E0813 00:57:54.528099 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mbnvb_kube-system(2b8de9e1-b570-4daf-8356-82b085f5759f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mbnvb_kube-system(2b8de9e1-b570-4daf-8356-82b085f5759f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mbnvb" podUID="2b8de9e1-b570-4daf-8356-82b085f5759f" Aug 13 00:57:54.531705 kernel: audit: type=1325 audit(1755046674.456:284): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3241 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:54.456000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc894b7f10 a2=0 a3=7ffc894b7efc items=0 ppid=2359 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:54.578435 kernel: audit: type=1300 audit(1755046674.456:284): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc894b7f10 a2=0 a3=7ffc894b7efc items=0 ppid=2359 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:54.588906 env[1335]: time="2025-08-13T00:57:54.588832613Z" level=error msg="Failed to destroy network for sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.589686 env[1335]: time="2025-08-13T00:57:54.589621578Z" level=error msg="encountered an error cleaning up failed sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.589938 env[1335]: time="2025-08-13T00:57:54.589876891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cpcml,Uid:8da3e8d7-b831-4716-b890-6a89d4b7984d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Aug 13 00:57:54.590625 kubelet[2240]: E0813 00:57:54.590373 2240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.590625 kubelet[2240]: E0813 00:57:54.590472 2240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-cpcml" Aug 13 00:57:54.590625 kubelet[2240]: E0813 00:57:54.590507 2240 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-cpcml" Aug 13 00:57:54.591297 kubelet[2240]: E0813 00:57:54.590940 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-cpcml_calico-system(8da3e8d7-b831-4716-b890-6a89d4b7984d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-cpcml_calico-system(8da3e8d7-b831-4716-b890-6a89d4b7984d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-cpcml" podUID="8da3e8d7-b831-4716-b890-6a89d4b7984d" Aug 13 00:57:54.612663 kernel: audit: type=1327 audit(1755046674.456:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:54.456000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:54.612920 env[1335]: time="2025-08-13T00:57:54.605359803Z" level=error msg="Failed to destroy network for sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.600732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf-shm.mount: Deactivated successfully. 
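The *-shm.mount units that systemd reports as deactivated here (and the run-netns-cni\x2d… and volume mounts further down) are ordinary mount units whose names are derived from the mount path: the leading '/' is dropped, remaining '/' become '-', and other unsafe characters, notably '-' inside a path component, are hex-escaped as \xNN. A rough sketch of that escaping, covering only the common cases (escape_path is a made-up helper; systemd-escape --path is the authoritative tool):

```python
# Rough sketch of systemd's path-to-unit-name escaping, common cases only.
def escape_path(path: str) -> str:
    path = path.strip("/")
    out = []
    for ch in path:
        if ch == "/":
            out.append("-")                      # path separator -> '-'
        elif ch.isalnum() or ch in "_.":
            out.append(ch)                       # kept as-is
        else:
            # anything else (including '-') becomes \xNN
            out.append("".join(r"\x%02x" % b for b in ch.encode()))
    return "".join(out)

sandbox = "4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9"
print(escape_path(f"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/{sandbox}/shm") + ".mount")
# -> run-containerd-io.containerd.grpc.v1.cri-sandboxes-<id>-shm.mount, as above
print(escape_path("/run/netns/cni-b3a4cad9-bca7-0de9-acc1-ec386aa918e8") + ".mount")
# -> run-netns-cni\x2db3a4cad9\x2dbca7\x2d0de9\x2dacc1\x2dec386aa918e8.mount
```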
Aug 13 00:57:54.613134 env[1335]: time="2025-08-13T00:57:54.613052655Z" level=error msg="encountered an error cleaning up failed sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.614224 env[1335]: time="2025-08-13T00:57:54.613183752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c744b486-wkm2q,Uid:e00a54c4-ec09-455a-86b9-9b9e86402f95,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.614443 kubelet[2240]: E0813 00:57:54.613735 2240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.614443 kubelet[2240]: E0813 00:57:54.613805 2240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c744b486-wkm2q" Aug 13 00:57:54.614443 kubelet[2240]: E0813 00:57:54.613836 2240 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c744b486-wkm2q" Aug 13 00:57:54.614705 kubelet[2240]: E0813 00:57:54.613895 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c744b486-wkm2q_calico-system(e00a54c4-ec09-455a-86b9-9b9e86402f95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c744b486-wkm2q_calico-system(e00a54c4-ec09-455a-86b9-9b9e86402f95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c744b486-wkm2q" podUID="e00a54c4-ec09-455a-86b9-9b9e86402f95" Aug 13 00:57:54.634242 env[1335]: time="2025-08-13T00:57:54.634160776Z" level=error msg="Failed to destroy network for sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.634756 env[1335]: time="2025-08-13T00:57:54.634683638Z" level=error msg="encountered an error cleaning up failed sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.634902 env[1335]: time="2025-08-13T00:57:54.634782738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5654d7f6ff-28dlk,Uid:e4be5576-b47c-4b87-a843-c6ed19671979,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.635160 kubelet[2240]: E0813 00:57:54.635085 2240 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.635286 kubelet[2240]: E0813 00:57:54.635168 2240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5654d7f6ff-28dlk" Aug 13 00:57:54.635286 kubelet[2240]: E0813 00:57:54.635210 2240 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5654d7f6ff-28dlk" Aug 13 00:57:54.635411 kubelet[2240]: E0813 00:57:54.635287 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5654d7f6ff-28dlk_calico-system(e4be5576-b47c-4b87-a843-c6ed19671979)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5654d7f6ff-28dlk_calico-system(e4be5576-b47c-4b87-a843-c6ed19671979)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5654d7f6ff-28dlk" podUID="e4be5576-b47c-4b87-a843-c6ed19671979" Aug 13 00:57:54.702164 env[1335]: time="2025-08-13T00:57:54.702077276Z" level=error msg="StopPodSandbox for \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\" failed" error="failed to destroy 
network for sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.702463 kubelet[2240]: E0813 00:57:54.702401 2240 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:57:54.702657 kubelet[2240]: E0813 00:57:54.702499 2240 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9"} Aug 13 00:57:54.702657 kubelet[2240]: E0813 00:57:54.702585 2240 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42c2f2da-5a83-4b40-aec1-478c8ca60301\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:57:54.702891 kubelet[2240]: E0813 00:57:54.702659 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42c2f2da-5a83-4b40-aec1-478c8ca60301\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f5mjv" podUID="42c2f2da-5a83-4b40-aec1-478c8ca60301" Aug 13 00:57:54.728150 env[1335]: time="2025-08-13T00:57:54.728058766Z" level=error msg="StopPodSandbox for \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\" failed" error="failed to destroy network for sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.728461 kubelet[2240]: E0813 00:57:54.728390 2240 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:57:54.728639 kubelet[2240]: E0813 00:57:54.728482 2240 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05"} Aug 13 00:57:54.728639 kubelet[2240]: E0813 00:57:54.728538 2240 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2bb39f28-f779-40ce-a3ee-b95479595e66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:57:54.728639 kubelet[2240]: E0813 00:57:54.728585 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2bb39f28-f779-40ce-a3ee-b95479595e66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7cc8c448-5tbcd" podUID="2bb39f28-f779-40ce-a3ee-b95479595e66" Aug 13 00:57:54.735658 env[1335]: time="2025-08-13T00:57:54.733304793Z" level=error msg="StopPodSandbox for \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\" failed" error="failed to destroy network for sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:54.735870 kubelet[2240]: E0813 00:57:54.733584 2240 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:57:54.735870 kubelet[2240]: E0813 00:57:54.733696 2240 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78"} Aug 13 00:57:54.735870 kubelet[2240]: E0813 00:57:54.733749 2240 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"281890f3-f0b5-4757-b7db-b03ab8faf735\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:57:54.735870 kubelet[2240]: E0813 00:57:54.733881 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"281890f3-f0b5-4757-b7db-b03ab8faf735\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hv7wr" 
podUID="281890f3-f0b5-4757-b7db-b03ab8faf735" Aug 13 00:57:55.076795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d-shm.mount: Deactivated successfully. Aug 13 00:57:55.077035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c-shm.mount: Deactivated successfully. Aug 13 00:57:55.406061 kubelet[2240]: I0813 00:57:55.406006 2240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:57:55.408731 env[1335]: time="2025-08-13T00:57:55.407436796Z" level=info msg="StopPodSandbox for \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\"" Aug 13 00:57:55.411178 kubelet[2240]: I0813 00:57:55.410824 2240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:57:55.412167 env[1335]: time="2025-08-13T00:57:55.412124648Z" level=info msg="StopPodSandbox for \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\"" Aug 13 00:57:55.419882 kubelet[2240]: I0813 00:57:55.416464 2240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:57:55.420090 env[1335]: time="2025-08-13T00:57:55.417256082Z" level=info msg="StopPodSandbox for \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\"" Aug 13 00:57:55.422443 kubelet[2240]: I0813 00:57:55.421886 2240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:57:55.428690 env[1335]: time="2025-08-13T00:57:55.428641528Z" level=info msg="StopPodSandbox for \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\"" Aug 13 00:57:55.438166 kubelet[2240]: I0813 00:57:55.438119 2240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:57:55.442181 env[1335]: time="2025-08-13T00:57:55.440957428Z" level=info msg="StopPodSandbox for \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\"" Aug 13 00:57:55.616325 env[1335]: time="2025-08-13T00:57:55.616232679Z" level=error msg="StopPodSandbox for \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\" failed" error="failed to destroy network for sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:55.617083 kubelet[2240]: E0813 00:57:55.616832 2240 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:57:55.617083 kubelet[2240]: E0813 00:57:55.616903 2240 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d"} Aug 13 00:57:55.617083 kubelet[2240]: E0813 00:57:55.616969 2240 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e00a54c4-ec09-455a-86b9-9b9e86402f95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:57:55.617083 kubelet[2240]: E0813 00:57:55.617011 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e00a54c4-ec09-455a-86b9-9b9e86402f95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c744b486-wkm2q" podUID="e00a54c4-ec09-455a-86b9-9b9e86402f95" Aug 13 00:57:55.628076 env[1335]: time="2025-08-13T00:57:55.627992619Z" level=error msg="StopPodSandbox for \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\" failed" error="failed to destroy network for sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:55.628360 kubelet[2240]: E0813 00:57:55.628308 2240 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:57:55.628502 kubelet[2240]: E0813 00:57:55.628382 2240 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf"} Aug 13 00:57:55.628502 kubelet[2240]: E0813 00:57:55.628438 2240 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8da3e8d7-b831-4716-b890-6a89d4b7984d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:57:55.628709 kubelet[2240]: E0813 00:57:55.628476 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8da3e8d7-b831-4716-b890-6a89d4b7984d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-cpcml" podUID="8da3e8d7-b831-4716-b890-6a89d4b7984d" Aug 13 00:57:55.629766 env[1335]: time="2025-08-13T00:57:55.629697864Z" level=error msg="StopPodSandbox for \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\" failed" error="failed to destroy network for sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:55.629991 kubelet[2240]: E0813 00:57:55.629948 2240 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:57:55.630144 kubelet[2240]: E0813 00:57:55.630004 2240 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7"} Aug 13 00:57:55.630144 kubelet[2240]: E0813 00:57:55.630048 2240 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a1beee6-8c8b-44a5-9d7c-8a8072355c14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:57:55.630144 kubelet[2240]: E0813 00:57:55.630086 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a1beee6-8c8b-44a5-9d7c-8a8072355c14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7cc8c448-zvxvr" podUID="3a1beee6-8c8b-44a5-9d7c-8a8072355c14" Aug 13 00:57:55.632751 env[1335]: time="2025-08-13T00:57:55.632692265Z" level=error msg="StopPodSandbox for \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\" failed" error="failed to destroy network for sandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:55.633172 kubelet[2240]: E0813 00:57:55.633116 2240 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:57:55.633312 kubelet[2240]: E0813 00:57:55.633180 2240 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8"} Aug 13 00:57:55.633312 kubelet[2240]: E0813 00:57:55.633226 2240 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b8de9e1-b570-4daf-8356-82b085f5759f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:57:55.633312 kubelet[2240]: E0813 00:57:55.633268 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b8de9e1-b570-4daf-8356-82b085f5759f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mbnvb" podUID="2b8de9e1-b570-4daf-8356-82b085f5759f" Aug 13 00:57:55.648564 env[1335]: time="2025-08-13T00:57:55.648473315Z" level=error msg="StopPodSandbox for \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\" failed" error="failed to destroy network for sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:57:55.648889 kubelet[2240]: E0813 00:57:55.648813 2240 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:57:55.649020 kubelet[2240]: E0813 00:57:55.648911 2240 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c"} Aug 13 00:57:55.649020 kubelet[2240]: E0813 00:57:55.648961 2240 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4be5576-b47c-4b87-a843-c6ed19671979\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:57:55.649020 kubelet[2240]: E0813 00:57:55.649003 2240 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"e4be5576-b47c-4b87-a843-c6ed19671979\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5654d7f6ff-28dlk" podUID="e4be5576-b47c-4b87-a843-c6ed19671979" Aug 13 00:58:01.016732 kernel: audit: type=1400 audit(1755046680.993:285): avc: denied { associate } for pid=1533 comm="google_accounts" name="#72" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=0 Aug 13 00:58:00.993000 audit[1533]: AVC avc: denied { associate } for pid=1533 comm="google_accounts" name="#72" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=0 Aug 13 00:58:01.018619 google-accounts[1533]: ERROR Exception calling the response handler. [Errno 13] Permission denied: '/var/lib/google'. Traceback (most recent call last): File "/usr/lib/python3.9/site-packages/google_compute_engine/metadata_watcher.py", line 200, in WatchMetadata handler(response) File "/usr/lib/python3.9/site-packages/google_compute_engine/accounts/accounts_daemon.py", line 285, in HandleAccounts self.utils.SetConfiguredUsers(desired_users.keys()) File "/usr/lib/python3.9/site-packages/google_compute_engine/accounts/accounts_utils.py", line 324, in SetConfiguredUsers os.makedirs(self.google_users_dir) File "/usr/lib/python-exec/python3.9/../../../lib/python3.9/os.py", line 225, in makedirs mkdir(name, mode) PermissionError: [Errno 13] Permission denied: '/var/lib/google' Aug 13 00:58:00.993000 audit[1533]: SYSCALL arch=c000003e syscall=83 success=no exit=-13 a0=7fa6b8807860 a1=1ff a2=1ff a3=0 items=0 ppid=1479 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=4294967295 comm="google_accounts" exe="/usr/bin/python3.9" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:01.059235 kernel: audit: type=1300 audit(1755046680.993:285): arch=c000003e syscall=83 success=no exit=-13 a0=7fa6b8807860 a1=1ff a2=1ff a3=0 items=0 ppid=1479 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=4294967295 comm="google_accounts" exe="/usr/bin/python3.9" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:00.993000 audit: PROCTITLE proctitle=2F7573722F6C69622F707974686F6E2D657865632F707974686F6E332E392F707974686F6E33002F7573722F62696E2F676F6F676C655F6163636F756E74735F6461656D6F6E Aug 13 00:58:01.078616 kernel: audit: type=1327 audit(1755046680.993:285): proctitle=2F7573722F6C69622F707974686F6E2D657865632F707974686F6E332E392F707974686F6E33002F7573722F62696E2F676F6F676C655F6163636F756E74735F6461656D6F6E Aug 13 00:58:02.116371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3165305456.mount: Deactivated successfully. 
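The google_accounts traceback above is an unhandled PermissionError from os.makedirs('/var/lib/google') inside accounts_utils.SetConfiguredUsers(), which is why it surfaces as "Exception calling the response handler" in the metadata watcher; the preceding AVC record shows the denial that produced EACCES. A simplified, hypothetical version of that code path with the error handled instead of propagated (ensure_users_dir and GOOGLE_USERS_DIR are names invented for this sketch):

```python
import logging
import os

GOOGLE_USERS_DIR = "/var/lib/google"   # path taken from the traceback above

def ensure_users_dir(path=GOOGLE_USERS_DIR):
    """Create the guest agent's state directory, tolerating EACCES.

    Simplified stand-in for what the traceback shows: the daemon calls
    os.makedirs() unguarded, so the PermissionError propagates up to the
    metadata watcher's response handler.
    """
    try:
        os.makedirs(path, exist_ok=True)
        return True
    except PermissionError as exc:      # [Errno 13], as in the log
        logging.error("cannot create %s: %s", path, exc)
        return False
```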
Aug 13 00:58:02.162136 env[1335]: time="2025-08-13T00:58:02.162058956Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:02.166355 env[1335]: time="2025-08-13T00:58:02.166297074Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:02.169458 env[1335]: time="2025-08-13T00:58:02.169401735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:02.172025 env[1335]: time="2025-08-13T00:58:02.171963163Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:02.172653 env[1335]: time="2025-08-13T00:58:02.172572043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 00:58:02.206348 env[1335]: time="2025-08-13T00:58:02.206285303Z" level=info msg="CreateContainer within sandbox \"80e492e725df1df56613eecb8f6b948029509bee48ae76b9bc100d7b75195e61\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:58:02.236061 env[1335]: time="2025-08-13T00:58:02.235967813Z" level=info msg="CreateContainer within sandbox \"80e492e725df1df56613eecb8f6b948029509bee48ae76b9bc100d7b75195e61\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2c5fd4adf73a597c6e1969debd35edef0e8558314e8afa31c3d5dcce8f506517\"" Aug 13 00:58:02.237084 env[1335]: time="2025-08-13T00:58:02.237037974Z" level=info msg="StartContainer for \"2c5fd4adf73a597c6e1969debd35edef0e8558314e8afa31c3d5dcce8f506517\"" Aug 13 00:58:02.329537 env[1335]: time="2025-08-13T00:58:02.327236072Z" level=info msg="StartContainer for \"2c5fd4adf73a597c6e1969debd35edef0e8558314e8afa31c3d5dcce8f506517\" returns successfully" Aug 13 00:58:02.471136 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:58:02.471337 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
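The ImageCreate/ImageUpdate events above name the same calico/node image three ways: by tag (ghcr.io/flatcar/calico/node:v3.30.2), by digest (…@sha256:e94d…), and by the image ID (sha256:cc52…) that PullImage returns. A trivial, purely illustrative helper to tell the three forms apart (classify_image_ref is not a containerd API):

```python
# Distinguish the three reference forms seen in the image events above.
def classify_image_ref(ref: str) -> str:
    if ref.startswith("sha256:"):
        return "image ID"
    if "@sha256:" in ref:
        return "digest reference"
    if ":" in ref.rsplit("/", 1)[-1]:
        return "tag reference"
    return "name only"

for ref in (
    "ghcr.io/flatcar/calico/node:v3.30.2",
    "ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760",
    "sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20",
):
    print(ref, "->", classify_image_ref(ref))
```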
Aug 13 00:58:02.503544 kubelet[2240]: I0813 00:58:02.503423 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hgvg8" podStartSLOduration=1.8027536290000001 podStartE2EDuration="20.503395849s" podCreationTimestamp="2025-08-13 00:57:42 +0000 UTC" firstStartedPulling="2025-08-13 00:57:43.47365574 +0000 UTC m=+23.612162022" lastFinishedPulling="2025-08-13 00:58:02.174297961 +0000 UTC m=+42.312804242" observedRunningTime="2025-08-13 00:58:02.501399387 +0000 UTC m=+42.639905676" watchObservedRunningTime="2025-08-13 00:58:02.503395849 +0000 UTC m=+42.641902139" Aug 13 00:58:02.686643 env[1335]: time="2025-08-13T00:58:02.686153882Z" level=info msg="StopPodSandbox for \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\"" Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.825 [INFO][3458] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.825 [INFO][3458] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" iface="eth0" netns="/var/run/netns/cni-b3a4cad9-bca7-0de9-acc1-ec386aa918e8" Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.826 [INFO][3458] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" iface="eth0" netns="/var/run/netns/cni-b3a4cad9-bca7-0de9-acc1-ec386aa918e8" Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.826 [INFO][3458] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" iface="eth0" netns="/var/run/netns/cni-b3a4cad9-bca7-0de9-acc1-ec386aa918e8" Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.826 [INFO][3458] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.826 [INFO][3458] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.880 [INFO][3470] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" HandleID="k8s-pod-network.d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--5654d7f6ff--28dlk-eth0" Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.881 [INFO][3470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.881 [INFO][3470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.895 [WARNING][3470] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" HandleID="k8s-pod-network.d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--5654d7f6ff--28dlk-eth0" Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.895 [INFO][3470] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" HandleID="k8s-pod-network.d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--5654d7f6ff--28dlk-eth0" Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.899 [INFO][3470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:02.905795 env[1335]: 2025-08-13 00:58:02.903 [INFO][3458] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:02.906677 env[1335]: time="2025-08-13T00:58:02.906612933Z" level=info msg="TearDown network for sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\" successfully" Aug 13 00:58:02.906853 env[1335]: time="2025-08-13T00:58:02.906828031Z" level=info msg="StopPodSandbox for \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\" returns successfully" Aug 13 00:58:03.099757 kubelet[2240]: I0813 00:58:03.099698 2240 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4be5576-b47c-4b87-a843-c6ed19671979-whisker-ca-bundle\") pod \"e4be5576-b47c-4b87-a843-c6ed19671979\" (UID: \"e4be5576-b47c-4b87-a843-c6ed19671979\") " Aug 13 00:58:03.099969 kubelet[2240]: I0813 00:58:03.099778 2240 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4be5576-b47c-4b87-a843-c6ed19671979-whisker-backend-key-pair\") pod \"e4be5576-b47c-4b87-a843-c6ed19671979\" (UID: \"e4be5576-b47c-4b87-a843-c6ed19671979\") " Aug 13 00:58:03.099969 kubelet[2240]: I0813 00:58:03.099813 2240 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnjmt\" (UniqueName: \"kubernetes.io/projected/e4be5576-b47c-4b87-a843-c6ed19671979-kube-api-access-cnjmt\") pod \"e4be5576-b47c-4b87-a843-c6ed19671979\" (UID: \"e4be5576-b47c-4b87-a843-c6ed19671979\") " Aug 13 00:58:03.102072 kubelet[2240]: I0813 00:58:03.100808 2240 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4be5576-b47c-4b87-a843-c6ed19671979-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e4be5576-b47c-4b87-a843-c6ed19671979" (UID: "e4be5576-b47c-4b87-a843-c6ed19671979"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:58:03.110690 systemd[1]: run-netns-cni\x2db3a4cad9\x2dbca7\x2d0de9\x2dacc1\x2dec386aa918e8.mount: Deactivated successfully. Aug 13 00:58:03.110978 systemd[1]: var-lib-kubelet-pods-e4be5576\x2db47c\x2d4b87\x2da843\x2dc6ed19671979-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:58:03.111172 systemd[1]: var-lib-kubelet-pods-e4be5576\x2db47c\x2d4b87\x2da843\x2dc6ed19671979-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcnjmt.mount: Deactivated successfully. 
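The "Observed pod startup duration" record for calico-node-hgvg8 a few lines up logs both wall-clock timestamps and monotonic m=+ offsets, and its two durations are mutually consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that minus the image-pull window (lastFinishedPulling - firstStartedPulling). A quick check with the logged numbers:

```python
# Cross-check the startup-duration record for calico-node-hgvg8.
# Wall clock: created 00:57:42, observed running 00:58:02.503395849.
e2e = (58 * 60 + 2.503395849) - (57 * 60 + 42)   # podStartE2EDuration
pull = 42.312804242 - 23.612162022               # lastFinishedPulling - firstStartedPulling (m=+ offsets)
print(f"podStartE2EDuration = {e2e:.9f}s")       # 20.503395849, as logged
print(f"podStartSLOduration = {e2e - pull:.9f}s")  # 1.802753629, i.e. E2E minus the image-pull window
```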
Aug 13 00:58:03.118104 kubelet[2240]: I0813 00:58:03.118054 2240 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4be5576-b47c-4b87-a843-c6ed19671979-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e4be5576-b47c-4b87-a843-c6ed19671979" (UID: "e4be5576-b47c-4b87-a843-c6ed19671979"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:58:03.118391 kubelet[2240]: I0813 00:58:03.118318 2240 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4be5576-b47c-4b87-a843-c6ed19671979-kube-api-access-cnjmt" (OuterVolumeSpecName: "kube-api-access-cnjmt") pod "e4be5576-b47c-4b87-a843-c6ed19671979" (UID: "e4be5576-b47c-4b87-a843-c6ed19671979"). InnerVolumeSpecName "kube-api-access-cnjmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:58:03.200659 kubelet[2240]: I0813 00:58:03.200468 2240 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4be5576-b47c-4b87-a843-c6ed19671979-whisker-ca-bundle\") on node \"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 00:58:03.200659 kubelet[2240]: I0813 00:58:03.200519 2240 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4be5576-b47c-4b87-a843-c6ed19671979-whisker-backend-key-pair\") on node \"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 00:58:03.200659 kubelet[2240]: I0813 00:58:03.200540 2240 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnjmt\" (UniqueName: \"kubernetes.io/projected/e4be5576-b47c-4b87-a843-c6ed19671979-kube-api-access-cnjmt\") on node \"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 00:58:03.523294 systemd[1]: run-containerd-runc-k8s.io-2c5fd4adf73a597c6e1969debd35edef0e8558314e8afa31c3d5dcce8f506517-runc.OjaW7I.mount: Deactivated successfully. 
Aug 13 00:58:03.704853 kubelet[2240]: I0813 00:58:03.704775 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b186378a-f125-4702-b44d-3df79cf33330-whisker-backend-key-pair\") pod \"whisker-7bd8966bb5-lxg8k\" (UID: \"b186378a-f125-4702-b44d-3df79cf33330\") " pod="calico-system/whisker-7bd8966bb5-lxg8k" Aug 13 00:58:03.705885 kubelet[2240]: I0813 00:58:03.705842 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b186378a-f125-4702-b44d-3df79cf33330-whisker-ca-bundle\") pod \"whisker-7bd8966bb5-lxg8k\" (UID: \"b186378a-f125-4702-b44d-3df79cf33330\") " pod="calico-system/whisker-7bd8966bb5-lxg8k" Aug 13 00:58:03.706105 kubelet[2240]: I0813 00:58:03.706077 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wflg\" (UniqueName: \"kubernetes.io/projected/b186378a-f125-4702-b44d-3df79cf33330-kube-api-access-7wflg\") pod \"whisker-7bd8966bb5-lxg8k\" (UID: \"b186378a-f125-4702-b44d-3df79cf33330\") " pod="calico-system/whisker-7bd8966bb5-lxg8k" Aug 13 00:58:03.904863 env[1335]: time="2025-08-13T00:58:03.904768679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd8966bb5-lxg8k,Uid:b186378a-f125-4702-b44d-3df79cf33330,Namespace:calico-system,Attempt:0,}" Aug 13 00:58:04.093988 systemd-networkd[1087]: cali861f266b5a6: Link UP Aug 13 00:58:04.103035 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:58:04.130637 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali861f266b5a6: link becomes ready Aug 13 00:58:04.125639 systemd-networkd[1087]: cali861f266b5a6: Gained carrier Aug 13 00:58:04.154606 kubelet[2240]: I0813 00:58:04.153309 2240 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4be5576-b47c-4b87-a843-c6ed19671979" path="/var/lib/kubelet/pods/e4be5576-b47c-4b87-a843-c6ed19671979/volumes" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:03.958 [INFO][3512] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:03.976 [INFO][3512] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0 whisker-7bd8966bb5- calico-system b186378a-f125-4702-b44d-3df79cf33330 887 0 2025-08-13 00:58:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bd8966bb5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal whisker-7bd8966bb5-lxg8k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali861f266b5a6 [] [] }} ContainerID="5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" Namespace="calico-system" Pod="whisker-7bd8966bb5-lxg8k" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:03.976 [INFO][3512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" Namespace="calico-system" Pod="whisker-7bd8966bb5-lxg8k" 
WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.021 [INFO][3525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" HandleID="k8s-pod-network.5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.021 [INFO][3525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" HandleID="k8s-pod-network.5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5090), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", "pod":"whisker-7bd8966bb5-lxg8k", "timestamp":"2025-08-13 00:58:04.021173489 +0000 UTC"}, Hostname:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.021 [INFO][3525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.021 [INFO][3525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.021 [INFO][3525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal' Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.033 [INFO][3525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.041 [INFO][3525] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.047 [INFO][3525] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.050 [INFO][3525] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.054 [INFO][3525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.054 [INFO][3525] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.056 [INFO][3525] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535 
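The ipam records above trace the usual Calico flow: take the host-wide IPAM lock, look up the block affine to this node (192.168.75.0/26), load it, and pick the first free address before writing the block back to claim it. A toy sketch of that single 'first free address in the affine block' step (not Calico's allocator, which also records handles and performs compare-and-swap writes against the datastore):

package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks the block in address order and returns the first address
// that is neither the network address nor already allocated on this node.
func firstFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.75.0/26") // block this node holds an affinity for
	inUse := map[netip.Addr]bool{}                    // nothing assigned from it yet
	if ip, ok := firstFree(block, inUse); ok {
		fmt.Printf("assigned %s from %s\n", ip, block) // assigned 192.168.75.1 from 192.168.75.0/26
	}
}

In the records that follow, the block is written back to claim the address, the host-wide lock is released, and 192.168.75.1/26 is reported back to the CNI plugin.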
Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.064 [INFO][3525] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.074 [INFO][3525] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.1/26] block=192.168.75.0/26 handle="k8s-pod-network.5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.075 [INFO][3525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.1/26] handle="k8s-pod-network.5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.075 [INFO][3525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:04.166813 env[1335]: 2025-08-13 00:58:04.075 [INFO][3525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.1/26] IPv6=[] ContainerID="5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" HandleID="k8s-pod-network.5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0" Aug 13 00:58:04.168141 env[1335]: 2025-08-13 00:58:04.078 [INFO][3512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" Namespace="calico-system" Pod="whisker-7bd8966bb5-lxg8k" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0", GenerateName:"whisker-7bd8966bb5-", Namespace:"calico-system", SelfLink:"", UID:"b186378a-f125-4702-b44d-3df79cf33330", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bd8966bb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-7bd8966bb5-lxg8k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali861f266b5a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:04.168141 env[1335]: 2025-08-13 00:58:04.078 [INFO][3512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.1/32] ContainerID="5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" 
Namespace="calico-system" Pod="whisker-7bd8966bb5-lxg8k" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0" Aug 13 00:58:04.168141 env[1335]: 2025-08-13 00:58:04.078 [INFO][3512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali861f266b5a6 ContainerID="5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" Namespace="calico-system" Pod="whisker-7bd8966bb5-lxg8k" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0" Aug 13 00:58:04.168141 env[1335]: 2025-08-13 00:58:04.138 [INFO][3512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" Namespace="calico-system" Pod="whisker-7bd8966bb5-lxg8k" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0" Aug 13 00:58:04.168141 env[1335]: 2025-08-13 00:58:04.141 [INFO][3512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" Namespace="calico-system" Pod="whisker-7bd8966bb5-lxg8k" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0", GenerateName:"whisker-7bd8966bb5-", Namespace:"calico-system", SelfLink:"", UID:"b186378a-f125-4702-b44d-3df79cf33330", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bd8966bb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535", Pod:"whisker-7bd8966bb5-lxg8k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali861f266b5a6", MAC:"22:6c:ee:cf:44:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:04.168141 env[1335]: 2025-08-13 00:58:04.159 [INFO][3512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535" Namespace="calico-system" Pod="whisker-7bd8966bb5-lxg8k" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--7bd8966bb5--lxg8k-eth0" Aug 13 00:58:04.190196 env[1335]: time="2025-08-13T00:58:04.190010022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:58:04.190196 env[1335]: time="2025-08-13T00:58:04.190135424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:58:04.190861 env[1335]: time="2025-08-13T00:58:04.190798254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:58:04.191713 env[1335]: time="2025-08-13T00:58:04.191573956Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535 pid=3574 runtime=io.containerd.runc.v2 Aug 13 00:58:04.277000 audit[3615]: AVC avc: denied { write } for pid=3615 comm="tee" name="fd" dev="proc" ino=24443 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:58:04.303720 kernel: audit: type=1400 audit(1755046684.277:286): avc: denied { write } for pid=3615 comm="tee" name="fd" dev="proc" ino=24443 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:58:04.336676 kernel: audit: type=1300 audit(1755046684.277:286): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8dba2794 a2=241 a3=1b6 items=1 ppid=3541 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.277000 audit[3615]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8dba2794 a2=241 a3=1b6 items=1 ppid=3541 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.277000 audit: CWD cwd="/etc/service/enabled/bird/log" Aug 13 00:58:04.370181 kernel: audit: type=1307 audit(1755046684.277:286): cwd="/etc/service/enabled/bird/log" Aug 13 00:58:04.370332 kernel: audit: type=1302 audit(1755046684.277:286): item=0 name="/dev/fd/63" inode=25152 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:58:04.277000 audit: PATH item=0 name="/dev/fd/63" inode=25152 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:58:04.277000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:58:04.390634 kernel: audit: type=1327 audit(1755046684.277:286): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:58:04.391000 audit[3625]: AVC avc: denied { write } for pid=3625 comm="tee" name="fd" dev="proc" ino=25161 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:58:04.423614 kernel: audit: type=1400 audit(1755046684.391:287): avc: denied { write } for pid=3625 comm="tee" name="fd" dev="proc" ino=25161 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:58:04.391000 audit[3625]: SYSCALL arch=c000003e 
syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd2504d783 a2=241 a3=1b6 items=1 ppid=3552 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.482634 kernel: audit: type=1300 audit(1755046684.391:287): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd2504d783 a2=241 a3=1b6 items=1 ppid=3552 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.490388 env[1335]: time="2025-08-13T00:58:04.490309820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd8966bb5-lxg8k,Uid:b186378a-f125-4702-b44d-3df79cf33330,Namespace:calico-system,Attempt:0,} returns sandbox id \"5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535\"" Aug 13 00:58:04.493374 env[1335]: time="2025-08-13T00:58:04.493316643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:58:04.391000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Aug 13 00:58:04.391000 audit: PATH item=0 name="/dev/fd/63" inode=24453 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:58:04.391000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:58:04.399000 audit[3638]: AVC avc: denied { write } for pid=3638 comm="tee" name="fd" dev="proc" ino=24506 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:58:04.399000 audit[3638]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcec220784 a2=241 a3=1b6 items=1 ppid=3549 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.399000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Aug 13 00:58:04.399000 audit: PATH item=0 name="/dev/fd/63" inode=24502 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:58:04.399000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:58:04.399000 audit[3629]: AVC avc: denied { write } for pid=3629 comm="tee" name="fd" dev="proc" ino=24510 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:58:04.399000 audit[3629]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe1d582793 a2=241 a3=1b6 items=1 ppid=3554 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.399000 audit: CWD cwd="/etc/service/enabled/felix/log" Aug 13 00:58:04.399000 audit: PATH item=0 name="/dev/fd/63" inode=24488 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:58:04.399000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:58:04.433000 audit[3636]: AVC avc: denied { write } for pid=3636 comm="tee" name="fd" dev="proc" ino=25167 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:58:04.436000 audit[3634]: AVC avc: denied { write } for pid=3634 comm="tee" name="fd" dev="proc" ino=25170 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:58:04.436000 audit[3634]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc57343793 a2=241 a3=1b6 items=1 ppid=3551 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.436000 audit: CWD cwd="/etc/service/enabled/confd/log" Aug 13 00:58:04.436000 audit: PATH item=0 name="/dev/fd/63" inode=24498 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:58:04.436000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:58:04.446000 audit[3632]: AVC avc: denied { write } for pid=3632 comm="tee" name="fd" dev="proc" ino=25175 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:58:04.446000 audit[3632]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc79e1a795 a2=241 a3=1b6 items=1 ppid=3545 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.446000 audit: CWD cwd="/etc/service/enabled/cni/log" Aug 13 00:58:04.446000 audit: PATH item=0 name="/dev/fd/63" inode=24497 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:58:04.446000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:58:04.433000 audit[3636]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd3f99e793 a2=241 a3=1b6 items=1 ppid=3547 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.433000 audit: CWD cwd="/etc/service/enabled/bird6/log" Aug 13 00:58:04.433000 audit: PATH item=0 name="/dev/fd/63" inode=24499 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:58:04.433000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:58:04.965000 audit[3699]: AVC avc: denied { bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.965000 audit[3699]: AVC avc: denied { 
bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.965000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.965000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.965000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.965000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.965000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.965000 audit[3699]: AVC avc: denied { bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.965000 audit[3699]: AVC avc: denied { bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.965000 audit: BPF prog-id=10 op=LOAD Aug 13 00:58:04.965000 audit[3699]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc3f81ac60 a2=98 a3=1fffffffffffffff items=0 ppid=3555 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.965000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:58:04.967000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:58:04.967000 audit[3699]: AVC avc: denied { bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.967000 audit[3699]: AVC avc: denied { bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.967000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.967000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.967000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.967000 audit[3699]: AVC avc: denied { perfmon } 
for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.967000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.967000 audit[3699]: AVC avc: denied { bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.967000 audit[3699]: AVC avc: denied { bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.967000 audit: BPF prog-id=11 op=LOAD Aug 13 00:58:04.967000 audit[3699]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc3f81ab40 a2=94 a3=3 items=0 ppid=3555 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.967000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:58:04.968000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:58:04.968000 audit[3699]: AVC avc: denied { bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.968000 audit[3699]: AVC avc: denied { bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.968000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.968000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.968000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.968000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.968000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.968000 audit[3699]: AVC avc: denied { bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.968000 audit[3699]: AVC avc: denied { bpf } for pid=3699 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.968000 audit: BPF prog-id=12 op=LOAD Aug 13 00:58:04.968000 audit[3699]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc3f81ab80 a2=94 a3=7ffc3f81ad60 items=0 ppid=3555 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.968000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:58:04.969000 audit: BPF prog-id=12 op=UNLOAD Aug 13 00:58:04.969000 audit[3699]: AVC avc: denied { perfmon } for pid=3699 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.969000 audit[3699]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc3f81ac50 a2=50 a3=a000000085 items=0 ppid=3555 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.969000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:58:04.972000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.972000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.972000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.972000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.972000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.972000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.972000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.972000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.972000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.972000 audit: BPF prog-id=13 op=LOAD Aug 13 00:58:04.972000 audit[3700]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd8fec440 a2=98 a3=3 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.972000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:04.973000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit: BPF prog-id=14 op=LOAD Aug 13 00:58:04.973000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdd8fec230 a2=94 a3=54428f items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.973000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:04.973000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:58:04.973000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:04.973000 audit: BPF prog-id=15 op=LOAD Aug 13 00:58:04.973000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdd8fec260 a2=94 a3=2 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:04.973000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:04.974000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:58:05.119737 env[1335]: time="2025-08-13T00:58:05.119556632Z" level=info msg="StopPodSandbox for \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\"" Aug 13 00:58:05.123000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.123000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.123000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.123000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.123000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.123000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.123000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.123000 audit[3700]: AVC avc: denied { bpf 
} for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.123000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.123000 audit: BPF prog-id=16 op=LOAD Aug 13 00:58:05.123000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdd8fec120 a2=94 a3=1 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.123000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.125000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:58:05.125000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.125000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffdd8fec1f0 a2=50 a3=7ffdd8fec2d0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.125000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.143000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.143000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd8fec130 a2=28 a3=0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.143000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd8fec160 a2=28 a3=0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd8fec070 a2=28 a3=0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd8fec180 a2=28 a3=0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd8fec160 a2=28 a3=0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd8fec150 a2=28 a3=0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd8fec180 a2=28 a3=0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd8fec160 a2=28 a3=0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd8fec180 a2=28 a3=0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd8fec150 a2=28 a3=0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd8fec1c0 a2=28 a3=0 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdd8febf70 a2=50 a3=1 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.144000 audit: BPF prog-id=17 op=LOAD Aug 13 00:58:05.144000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdd8febf70 a2=94 a3=5 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.144000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.145000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdd8fec020 a2=50 a3=1 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.145000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffdd8fec140 a2=4 a3=38 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.145000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { confidentiality } for pid=3700 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:58:05.145000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdd8fec190 a2=94 a3=6 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.145000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.145000 audit[3700]: AVC avc: denied { confidentiality } for pid=3700 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:58:05.145000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdd8feb940 a2=94 a3=88 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.145000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.146000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.146000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.146000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.146000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.146000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.146000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.146000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.146000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.146000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.146000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.146000 audit[3700]: AVC avc: denied { confidentiality } for pid=3700 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:58:05.146000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdd8feb940 a2=94 a3=88 items=0 ppid=3555 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.146000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:58:05.165000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.165000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.165000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.165000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.165000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.165000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.165000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.165000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.165000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.165000 audit: BPF prog-id=18 op=LOAD Aug 13 00:58:05.165000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe1974d90 a2=98 a3=1999999999999999 items=0 ppid=3555 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.165000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:58:05.166000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:58:05.166000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit: BPF prog-id=19 op=LOAD Aug 13 00:58:05.166000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe1974c70 a2=94 a3=ffff items=0 ppid=3555 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.166000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:58:05.166000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:58:05.166000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.166000 audit: BPF prog-id=20 op=LOAD Aug 13 00:58:05.166000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe1974cb0 a2=94 a3=7fffe1974e90 items=0 ppid=3555 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.166000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:58:05.166000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:58:05.326565 systemd-networkd[1087]: vxlan.calico: Link UP Aug 13 00:58:05.326585 systemd-networkd[1087]: vxlan.calico: Gained carrier Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.237 [INFO][3710] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.237 [INFO][3710] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" iface="eth0" netns="/var/run/netns/cni-3774049e-1b6e-0072-102e-b15ccb9c098d" Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.238 [INFO][3710] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" iface="eth0" netns="/var/run/netns/cni-3774049e-1b6e-0072-102e-b15ccb9c098d" Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.242 [INFO][3710] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" iface="eth0" netns="/var/run/netns/cni-3774049e-1b6e-0072-102e-b15ccb9c098d" Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.246 [INFO][3710] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.246 [INFO][3710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.302 [INFO][3732] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" HandleID="k8s-pod-network.a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.302 [INFO][3732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.302 [INFO][3732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
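The audit records above show bpftool (a child of pid 3555) being checked for CAP_BPF (capability=39) and CAP_PERFMON (capability=38) under SELinux, with additional kernel-lockdown denials ("use of bpf to read kernel RAM"); syscall=321 with arch=c000003e is bpf(2) on x86_64, and exit=-22 is EINVAL. The PROCTITLE field is the hex-encoded, NUL-separated argv of the audited process: the values above decode to "bpftool map list --json" and "bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1 type hash key 4 value 1 entries 65535 name calico_failsafe_ports_..." (the tail is apparently truncated by the audit proctitle length limit). A minimal decoding sketch in Python, using a sample value copied from the records above:

    def decode_proctitle(hex_value: str) -> str:
        # audit PROCTITLE: hex-encoded argv with NUL separators
        raw = bytes.fromhex(hex_value)
        return " ".join(a.decode("utf-8", "replace") for a in raw.split(b"\x00") if a)

    sample = "627066746F6F6C006D6170006C697374002D2D6A736F6E"
    print(decode_proctitle(sample))  # -> bpftool map list --json

The same decoding applies to the longer proctitle values in the records that follow.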
Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.314 [WARNING][3732] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" HandleID="k8s-pod-network.a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.314 [INFO][3732] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" HandleID="k8s-pod-network.a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.319 [INFO][3732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:05.330918 env[1335]: 2025-08-13 00:58:05.320 [INFO][3710] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:05.342573 systemd[1]: run-netns-cni\x2d3774049e\x2d1b6e\x2d0072\x2d102e\x2db15ccb9c098d.mount: Deactivated successfully. Aug 13 00:58:05.347581 env[1335]: time="2025-08-13T00:58:05.345043639Z" level=info msg="TearDown network for sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\" successfully" Aug 13 00:58:05.347581 env[1335]: time="2025-08-13T00:58:05.345098404Z" level=info msg="StopPodSandbox for \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\" returns successfully" Aug 13 00:58:05.355536 env[1335]: time="2025-08-13T00:58:05.355484860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7cc8c448-5tbcd,Uid:2bb39f28-f779-40ce-a3ee-b95479595e66,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:58:05.428919 systemd-networkd[1087]: cali861f266b5a6: Gained IPv6LL Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit: BPF prog-id=21 op=LOAD Aug 13 00:58:05.510000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd40c433e0 a2=98 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.510000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.510000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.510000 audit: BPF prog-id=22 op=LOAD Aug 13 00:58:05.510000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd40c431f0 a2=94 a3=54428f items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.510000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit: BPF prog-id=23 op=LOAD Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd40c43220 a2=94 a3=2 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd40c430f0 a2=28 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd40c43120 a2=28 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd40c43030 a2=28 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd40c43140 a2=28 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd40c43120 a2=28 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd40c43110 a2=28 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd40c43140 a2=28 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd40c43120 a2=28 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd40c43140 a2=28 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd40c43110 a2=28 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd40c43180 a2=28 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.511000 audit: BPF prog-id=24 op=LOAD Aug 13 00:58:05.511000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd40c42ff0 a2=94 a3=0 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.511000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.512000 audit: BPF prog-id=24 op=UNLOAD Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffd40c42fe0 a2=50 a3=2800 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.517000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffd40c42fe0 a2=50 a3=2800 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.517000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit: BPF prog-id=25 op=LOAD Aug 13 00:58:05.517000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd40c42800 a2=94 a3=2 items=0 ppid=3555 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.517000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.517000 audit: BPF prog-id=25 op=UNLOAD Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { perfmon } for pid=3762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit[3762]: AVC avc: denied { bpf } for pid=3762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.517000 audit: BPF prog-id=26 op=LOAD Aug 13 00:58:05.517000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd40c42900 a2=94 a3=30 items=0 ppid=3555 pid=3762 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.517000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit: BPF prog-id=27 op=LOAD Aug 13 00:58:05.523000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc571b5e0 a2=98 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.523000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.523000 audit: BPF prog-id=27 op=UNLOAD Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit: BPF prog-id=28 op=LOAD Aug 13 00:58:05.523000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc571b3d0 a2=94 a3=54428f items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.523000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.523000 audit: BPF prog-id=28 op=UNLOAD Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.523000 audit: BPF prog-id=29 op=LOAD Aug 13 00:58:05.523000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc571b400 a2=94 a3=2 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.523000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.523000 audit: BPF prog-id=29 op=UNLOAD Aug 13 00:58:05.789898 systemd-networkd[1087]: calie3c5f93b933: Link UP Aug 13 00:58:05.802138 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie3c5f93b933: link becomes ready Aug 13 00:58:05.803846 systemd-networkd[1087]: calie3c5f93b933: Gained carrier Aug 13 00:58:05.832000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.832000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.832000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.832000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.832000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.832000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.832000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.832000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.832000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.832000 audit: BPF prog-id=30 op=LOAD Aug 13 00:58:05.832000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc571b2c0 a2=94 a3=1 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.832000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.832000 audit: BPF prog-id=30 op=UNLOAD Aug 13 00:58:05.832000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.832000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcc571b390 a2=50 a3=7ffcc571b470 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.832000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.546 [INFO][3745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0 calico-apiserver-5d7cc8c448- calico-apiserver 2bb39f28-f779-40ce-a3ee-b95479595e66 899 0 2025-08-13 00:57:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d7cc8c448 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal calico-apiserver-5d7cc8c448-5tbcd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie3c5f93b933 [] [] }} ContainerID="f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-5tbcd" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.547 [INFO][3745] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-5tbcd" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.684 [INFO][3770] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" HandleID="k8s-pod-network.f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.684 [INFO][3770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" HandleID="k8s-pod-network.f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" 
Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004379e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", "pod":"calico-apiserver-5d7cc8c448-5tbcd", "timestamp":"2025-08-13 00:58:05.68176774 +0000 UTC"}, Hostname:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.684 [INFO][3770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.685 [INFO][3770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.685 [INFO][3770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal' Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.695 [INFO][3770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.702 [INFO][3770] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.708 [INFO][3770] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.749 [INFO][3770] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.757 [INFO][3770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.757 [INFO][3770] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.759 [INFO][3770] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559 Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.769 [INFO][3770] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.778 [INFO][3770] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.2/26] block=192.168.75.0/26 handle="k8s-pod-network.f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.778 [INFO][3770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.2/26] 
handle="k8s-pod-network.f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.778 [INFO][3770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:05.858281 env[1335]: 2025-08-13 00:58:05.778 [INFO][3770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.2/26] IPv6=[] ContainerID="f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" HandleID="k8s-pod-network.f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:05.859757 env[1335]: 2025-08-13 00:58:05.782 [INFO][3745] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-5tbcd" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0", GenerateName:"calico-apiserver-5d7cc8c448-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bb39f28-f779-40ce-a3ee-b95479595e66", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7cc8c448", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-5d7cc8c448-5tbcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3c5f93b933", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:05.859757 env[1335]: 2025-08-13 00:58:05.782 [INFO][3745] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.2/32] ContainerID="f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-5tbcd" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:05.859757 env[1335]: 2025-08-13 00:58:05.782 [INFO][3745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3c5f93b933 ContainerID="f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-5tbcd" 
WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:05.859757 env[1335]: 2025-08-13 00:58:05.811 [INFO][3745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-5tbcd" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:05.859757 env[1335]: 2025-08-13 00:58:05.816 [INFO][3745] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-5tbcd" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0", GenerateName:"calico-apiserver-5d7cc8c448-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bb39f28-f779-40ce-a3ee-b95479595e66", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7cc8c448", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559", Pod:"calico-apiserver-5d7cc8c448-5tbcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3c5f93b933", MAC:"2a:f2:71:37:a7:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:05.859757 env[1335]: 2025-08-13 00:58:05.855 [INFO][3745] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-5tbcd" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:05.869000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.869000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc571b2d0 a2=28 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.869000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.869000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcc571b300 a2=28 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.869000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.869000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcc571b210 a2=28 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.869000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.869000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc571b320 a2=28 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.869000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.869000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc571b300 a2=28 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.869000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:58:05.869000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc571b2f0 a2=28 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.869000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.869000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc571b320 a2=28 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.869000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.869000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcc571b300 a2=28 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.869000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.869000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcc571b320 a2=28 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.869000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.869000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcc571b2f0 a2=28 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.869000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.869000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.869000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc571b360 a2=28 a3=0 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcc571b110 a2=50 a3=1 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit: BPF prog-id=31 op=LOAD Aug 13 00:58:05.870000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcc571b110 a2=94 a3=5 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.870000 audit: BPF prog-id=31 op=UNLOAD Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcc571b1c0 a2=50 a3=1 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffcc571b2e0 a2=4 a3=38 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.870000 audit[3765]: AVC avc: denied { confidentiality } for pid=3765 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:58:05.870000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcc571b330 a2=94 a3=6 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { confidentiality } for pid=3765 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:58:05.871000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcc571aae0 a2=94 a3=88 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { perfmon } for pid=3765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { confidentiality } for pid=3765 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 
13 00:58:05.871000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcc571aae0 a2=94 a3=88 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.871000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.871000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffcc571c510 a2=10 a3=f8f00800 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.872000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.872000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffcc571c3b0 a2=10 a3=3 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.872000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.872000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffcc571c350 a2=10 a3=3 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.872000 audit[3765]: AVC avc: denied { bpf } for pid=3765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:58:05.872000 audit[3765]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffcc571c350 a2=10 a3=7 items=0 ppid=3555 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:05.872000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:58:05.883000 audit: BPF prog-id=26 op=UNLOAD Aug 13 00:58:05.914974 env[1335]: time="2025-08-13T00:58:05.914425139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:58:05.915357 env[1335]: time="2025-08-13T00:58:05.915308190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:58:05.915569 env[1335]: time="2025-08-13T00:58:05.915526923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:58:05.916269 env[1335]: time="2025-08-13T00:58:05.916209609Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559 pid=3794 runtime=io.containerd.runc.v2 Aug 13 00:58:06.045427 env[1335]: time="2025-08-13T00:58:06.045274825Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:06.061773 env[1335]: time="2025-08-13T00:58:06.061713082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:06.080065 env[1335]: time="2025-08-13T00:58:06.080009128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:06.093648 env[1335]: time="2025-08-13T00:58:06.093527680Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:06.095329 env[1335]: time="2025-08-13T00:58:06.095269610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 00:58:06.110384 kernel: kauditd_printk_skb: 538 callbacks suppressed Aug 13 00:58:06.110564 kernel: audit: type=1325 audit(1755046686.098:391): table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3847 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:06.098000 audit[3847]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3847 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:06.112522 env[1335]: time="2025-08-13T00:58:06.112456418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7cc8c448-5tbcd,Uid:2bb39f28-f779-40ce-a3ee-b95479595e66,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559\"" Aug 13 00:58:06.098000 audit[3847]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd13a02b60 a2=0 a3=7ffd13a02b4c items=0 ppid=3555 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:06.167176 env[1335]: time="2025-08-13T00:58:06.134146365Z" level=info msg="StopPodSandbox for \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\"" Aug 13 00:58:06.167176 env[1335]: time="2025-08-13T00:58:06.138915302Z" level=info msg="CreateContainer within sandbox \"5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:58:06.167176 env[1335]: time="2025-08-13T00:58:06.142118593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:58:06.167880 kernel: audit: type=1300 audit(1755046686.098:391): arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd13a02b60 a2=0 a3=7ffd13a02b4c items=0 ppid=3555 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:06.098000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:06.202673 kernel: audit: type=1327 audit(1755046686.098:391): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:06.202849 kernel: audit: type=1325 audit(1755046686.173:392): table=nat:102 family=2 entries=15 op=nft_register_chain pid=3840 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:06.173000 audit[3840]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3840 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:06.173000 audit[3840]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdf27ba810 a2=0 a3=7ffdf27ba7fc items=0 ppid=3555 pid=3840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:06.276239 kernel: audit: type=1300 audit(1755046686.173:392): arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdf27ba810 a2=0 a3=7ffdf27ba7fc items=0 ppid=3555 pid=3840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:06.276395 kernel: audit: type=1327 audit(1755046686.173:392): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:06.173000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:06.298763 kernel: audit: type=1325 audit(1755046686.277:393): table=raw:103 family=2 entries=21 op=nft_register_chain pid=3839 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:06.277000 audit[3839]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=3839 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:06.277000 audit[3839]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff2d44c130 a2=0 a3=7fff2d44c11c items=0 ppid=3555 pid=3839 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:06.333628 kernel: audit: type=1300 audit(1755046686.277:393): arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff2d44c130 a2=0 a3=7fff2d44c11c items=0 ppid=3555 pid=3839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:06.357639 kernel: audit: type=1327 audit(1755046686.277:393): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:06.277000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:06.376008 env[1335]: time="2025-08-13T00:58:06.375927134Z" level=info msg="CreateContainer within sandbox \"5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f3385d3884283eae4c8d02795266ce37ac3500ae7d62bf709e0eba1c290822fd\"" Aug 13 00:58:06.378479 env[1335]: time="2025-08-13T00:58:06.378428575Z" level=info msg="StartContainer for \"f3385d3884283eae4c8d02795266ce37ac3500ae7d62bf709e0eba1c290822fd\"" Aug 13 00:58:06.405788 kernel: audit: type=1325 audit(1755046686.296:394): table=filter:104 family=2 entries=94 op=nft_register_chain pid=3846 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:06.296000 audit[3846]: NETFILTER_CFG table=filter:104 family=2 entries=94 op=nft_register_chain pid=3846 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:06.296000 audit[3846]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffde5f007b0 a2=0 a3=55f74a3c1000 items=0 ppid=3555 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:06.296000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:06.492000 audit[3905]: NETFILTER_CFG table=filter:105 family=2 entries=50 op=nft_register_chain pid=3905 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:06.492000 audit[3905]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffc1464dcf0 a2=0 a3=7ffc1464dcdc items=0 ppid=3555 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:06.492000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.437 [INFO][3868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.437 [INFO][3868] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" iface="eth0" netns="/var/run/netns/cni-0e15d9e7-13c1-1826-d853-b164ba33a17c" Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.438 [INFO][3868] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" iface="eth0" netns="/var/run/netns/cni-0e15d9e7-13c1-1826-d853-b164ba33a17c" Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.438 [INFO][3868] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" iface="eth0" netns="/var/run/netns/cni-0e15d9e7-13c1-1826-d853-b164ba33a17c" Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.438 [INFO][3868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.438 [INFO][3868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.572 [INFO][3890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" HandleID="k8s-pod-network.0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.574 [INFO][3890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.574 [INFO][3890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.585 [WARNING][3890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" HandleID="k8s-pod-network.0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.585 [INFO][3890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" HandleID="k8s-pod-network.0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.588 [INFO][3890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:06.593026 env[1335]: 2025-08-13 00:58:06.590 [INFO][3868] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:06.598880 env[1335]: time="2025-08-13T00:58:06.598803539Z" level=info msg="TearDown network for sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\" successfully" Aug 13 00:58:06.599136 env[1335]: time="2025-08-13T00:58:06.599101596Z" level=info msg="StopPodSandbox for \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\" returns successfully" Aug 13 00:58:06.600496 systemd[1]: run-netns-cni\x2d0e15d9e7\x2d13c1\x2d1826\x2dd853\x2db164ba33a17c.mount: Deactivated successfully. Aug 13 00:58:06.603971 env[1335]: time="2025-08-13T00:58:06.603919766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7cc8c448-zvxvr,Uid:3a1beee6-8c8b-44a5-9d7c-8a8072355c14,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:58:06.651755 env[1335]: time="2025-08-13T00:58:06.650480112Z" level=info msg="StartContainer for \"f3385d3884283eae4c8d02795266ce37ac3500ae7d62bf709e0eba1c290822fd\" returns successfully" Aug 13 00:58:06.829161 systemd-networkd[1087]: cali3c8bdae2843: Link UP Aug 13 00:58:06.837782 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:58:06.846623 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3c8bdae2843: link becomes ready Aug 13 00:58:06.853931 systemd-networkd[1087]: cali3c8bdae2843: Gained carrier Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.713 [INFO][3926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0 calico-apiserver-5d7cc8c448- calico-apiserver 3a1beee6-8c8b-44a5-9d7c-8a8072355c14 907 0 2025-08-13 00:57:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d7cc8c448 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal calico-apiserver-5d7cc8c448-zvxvr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3c8bdae2843 [] [] }} ContainerID="9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-zvxvr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-" Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.713 [INFO][3926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-zvxvr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.761 [INFO][3941] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" HandleID="k8s-pod-network.9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.762 [INFO][3941] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" HandleID="k8s-pod-network.9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", "pod":"calico-apiserver-5d7cc8c448-zvxvr", "timestamp":"2025-08-13 00:58:06.761837908 +0000 UTC"}, Hostname:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.762 [INFO][3941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.762 [INFO][3941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.762 [INFO][3941] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal' Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.779 [INFO][3941] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.786 [INFO][3941] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.796 [INFO][3941] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.799 [INFO][3941] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.802 [INFO][3941] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.802 [INFO][3941] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.805 [INFO][3941] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250 Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.811 [INFO][3941] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.821 [INFO][3941] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.3/26] block=192.168.75.0/26 handle="k8s-pod-network.9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 
00:58:06.875298 env[1335]: 2025-08-13 00:58:06.821 [INFO][3941] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.3/26] handle="k8s-pod-network.9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.822 [INFO][3941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:06.875298 env[1335]: 2025-08-13 00:58:06.822 [INFO][3941] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.3/26] IPv6=[] ContainerID="9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" HandleID="k8s-pod-network.9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:06.876509 env[1335]: 2025-08-13 00:58:06.824 [INFO][3926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-zvxvr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0", GenerateName:"calico-apiserver-5d7cc8c448-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a1beee6-8c8b-44a5-9d7c-8a8072355c14", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7cc8c448", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-5d7cc8c448-zvxvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c8bdae2843", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:06.876509 env[1335]: 2025-08-13 00:58:06.824 [INFO][3926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.3/32] ContainerID="9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-zvxvr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:06.876509 env[1335]: 2025-08-13 00:58:06.824 [INFO][3926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c8bdae2843 ContainerID="9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" Namespace="calico-apiserver" 
Pod="calico-apiserver-5d7cc8c448-zvxvr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:06.876509 env[1335]: 2025-08-13 00:58:06.853 [INFO][3926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-zvxvr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:06.876509 env[1335]: 2025-08-13 00:58:06.854 [INFO][3926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-zvxvr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0", GenerateName:"calico-apiserver-5d7cc8c448-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a1beee6-8c8b-44a5-9d7c-8a8072355c14", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7cc8c448", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250", Pod:"calico-apiserver-5d7cc8c448-zvxvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c8bdae2843", MAC:"5a:39:24:fa:37:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:06.876509 env[1335]: 2025-08-13 00:58:06.872 [INFO][3926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250" Namespace="calico-apiserver" Pod="calico-apiserver-5d7cc8c448-zvxvr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:06.897000 audit[3955]: NETFILTER_CFG table=filter:106 family=2 entries=41 op=nft_register_chain pid=3955 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:06.897000 audit[3955]: SYSCALL arch=c000003e syscall=46 success=yes exit=23076 a0=3 a1=7fffed437a00 a2=0 a3=7fffed4379ec items=0 ppid=3555 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:06.897000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:06.904140 env[1335]: time="2025-08-13T00:58:06.904017438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:58:06.904140 env[1335]: time="2025-08-13T00:58:06.904090072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:58:06.904538 env[1335]: time="2025-08-13T00:58:06.904119974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:58:06.904923 env[1335]: time="2025-08-13T00:58:06.904866651Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250 pid=3962 runtime=io.containerd.runc.v2 Aug 13 00:58:06.994234 env[1335]: time="2025-08-13T00:58:06.994140221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7cc8c448-zvxvr,Uid:3a1beee6-8c8b-44a5-9d7c-8a8072355c14,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250\"" Aug 13 00:58:07.122821 env[1335]: time="2025-08-13T00:58:07.122636046Z" level=info msg="StopPodSandbox for \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\"" Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.248 [INFO][4005] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.248 [INFO][4005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" iface="eth0" netns="/var/run/netns/cni-340c01e9-872d-b630-e31b-833f7102883d" Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.249 [INFO][4005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" iface="eth0" netns="/var/run/netns/cni-340c01e9-872d-b630-e31b-833f7102883d" Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.249 [INFO][4005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" iface="eth0" netns="/var/run/netns/cni-340c01e9-872d-b630-e31b-833f7102883d" Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.249 [INFO][4005] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.249 [INFO][4005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.311 [INFO][4012] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" HandleID="k8s-pod-network.543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.313 [INFO][4012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.313 [INFO][4012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.327 [WARNING][4012] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" HandleID="k8s-pod-network.543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.327 [INFO][4012] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" HandleID="k8s-pod-network.543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.330 [INFO][4012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:07.345860 env[1335]: 2025-08-13 00:58:07.333 [INFO][4005] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:07.348232 env[1335]: time="2025-08-13T00:58:07.348160972Z" level=info msg="TearDown network for sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\" successfully" Aug 13 00:58:07.348622 env[1335]: time="2025-08-13T00:58:07.348527442Z" level=info msg="StopPodSandbox for \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\" returns successfully" Aug 13 00:58:07.350380 systemd-networkd[1087]: vxlan.calico: Gained IPv6LL Aug 13 00:58:07.364705 env[1335]: time="2025-08-13T00:58:07.358681129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hv7wr,Uid:281890f3-f0b5-4757-b7db-b03ab8faf735,Namespace:calico-system,Attempt:1,}" Aug 13 00:58:07.364132 systemd[1]: run-netns-cni\x2d340c01e9\x2d872d\x2db630\x2de31b\x2d833f7102883d.mount: Deactivated successfully. 
Aug 13 00:58:07.607346 systemd-networkd[1087]: calie3c5f93b933: Gained IPv6LL Aug 13 00:58:07.635016 systemd-networkd[1087]: calie141a1ff581: Link UP Aug 13 00:58:07.648642 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie141a1ff581: link becomes ready Aug 13 00:58:07.650696 systemd-networkd[1087]: calie141a1ff581: Gained carrier Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.496 [INFO][4018] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0 csi-node-driver- calico-system 281890f3-f0b5-4757-b7db-b03ab8faf735 918 0 2025-08-13 00:57:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal csi-node-driver-hv7wr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie141a1ff581 [] [] }} ContainerID="7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" Namespace="calico-system" Pod="csi-node-driver-hv7wr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.497 [INFO][4018] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" Namespace="calico-system" Pod="csi-node-driver-hv7wr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.552 [INFO][4031] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" HandleID="k8s-pod-network.7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.553 [INFO][4031] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" HandleID="k8s-pod-network.7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325670), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", "pod":"csi-node-driver-hv7wr", "timestamp":"2025-08-13 00:58:07.552044588 +0000 UTC"}, Hostname:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.553 [INFO][4031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.553 [INFO][4031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.553 [INFO][4031] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal' Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.567 [INFO][4031] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.576 [INFO][4031] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.585 [INFO][4031] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.590 [INFO][4031] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.596 [INFO][4031] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.596 [INFO][4031] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.599 [INFO][4031] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932 Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.610 [INFO][4031] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.624 [INFO][4031] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.4/26] block=192.168.75.0/26 handle="k8s-pod-network.7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.624 [INFO][4031] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.4/26] handle="k8s-pod-network.7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.624 [INFO][4031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
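The ipam.go sequence just above is Calico's address-assignment path for csi-node-driver-hv7wr: look up the node's block affinity, load the affine 192.168.75.0/26 block, claim one address from it (192.168.75.4), write the block back, and release the host-wide IPAM lock. A quick sanity check of the block arithmetic with only Python's ipaddress module and the values from this log (the pods in this section land on .3 through .6 of the same /26):

    import ipaddress

    # Node-affine Calico IPAM block and the pod addresses assigned from it in this log.
    block = ipaddress.ip_network("192.168.75.0/26")
    assigned = ["192.168.75.3", "192.168.75.4", "192.168.75.5", "192.168.75.6"]

    print(block.num_addresses)                                      # 64 addresses per /26 block
    print(all(ipaddress.ip_address(a) in block for a in assigned))  # True: all fall inside the affine block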
Aug 13 00:58:07.700751 env[1335]: 2025-08-13 00:58:07.624 [INFO][4031] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.4/26] IPv6=[] ContainerID="7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" HandleID="k8s-pod-network.7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:07.702216 env[1335]: 2025-08-13 00:58:07.630 [INFO][4018] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" Namespace="calico-system" Pod="csi-node-driver-hv7wr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"281890f3-f0b5-4757-b7db-b03ab8faf735", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-hv7wr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie141a1ff581", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:07.702216 env[1335]: 2025-08-13 00:58:07.630 [INFO][4018] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.4/32] ContainerID="7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" Namespace="calico-system" Pod="csi-node-driver-hv7wr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:07.702216 env[1335]: 2025-08-13 00:58:07.630 [INFO][4018] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie141a1ff581 ContainerID="7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" Namespace="calico-system" Pod="csi-node-driver-hv7wr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:07.702216 env[1335]: 2025-08-13 00:58:07.653 [INFO][4018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" Namespace="calico-system" Pod="csi-node-driver-hv7wr" 
WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:07.702216 env[1335]: 2025-08-13 00:58:07.662 [INFO][4018] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" Namespace="calico-system" Pod="csi-node-driver-hv7wr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"281890f3-f0b5-4757-b7db-b03ab8faf735", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932", Pod:"csi-node-driver-hv7wr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie141a1ff581", MAC:"ba:d8:6b:62:5c:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:07.702216 env[1335]: 2025-08-13 00:58:07.694 [INFO][4018] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932" Namespace="calico-system" Pod="csi-node-driver-hv7wr" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:07.755937 env[1335]: time="2025-08-13T00:58:07.755822165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:58:07.756328 env[1335]: time="2025-08-13T00:58:07.756281642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:58:07.756495 env[1335]: time="2025-08-13T00:58:07.756460170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:58:07.757032 env[1335]: time="2025-08-13T00:58:07.756980314Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932 pid=4052 runtime=io.containerd.runc.v2 Aug 13 00:58:07.763000 audit[4056]: NETFILTER_CFG table=filter:107 family=2 entries=50 op=nft_register_chain pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:07.763000 audit[4056]: SYSCALL arch=c000003e syscall=46 success=yes exit=24804 a0=3 a1=7fff7d49d300 a2=0 a3=7fff7d49d2ec items=0 ppid=3555 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:07.763000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:07.888089 env[1335]: time="2025-08-13T00:58:07.888013451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hv7wr,Uid:281890f3-f0b5-4757-b7db-b03ab8faf735,Namespace:calico-system,Attempt:1,} returns sandbox id \"7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932\"" Aug 13 00:58:08.124520 env[1335]: time="2025-08-13T00:58:08.124182356Z" level=info msg="StopPodSandbox for \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\"" Aug 13 00:58:08.437793 systemd-networkd[1087]: cali3c8bdae2843: Gained IPv6LL Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.371 [INFO][4097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.371 [INFO][4097] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" iface="eth0" netns="/var/run/netns/cni-4858c5cf-b779-4824-9e59-1af42e7827fe" Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.372 [INFO][4097] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" iface="eth0" netns="/var/run/netns/cni-4858c5cf-b779-4824-9e59-1af42e7827fe" Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.372 [INFO][4097] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" iface="eth0" netns="/var/run/netns/cni-4858c5cf-b779-4824-9e59-1af42e7827fe" Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.372 [INFO][4097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.372 [INFO][4097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.466 [INFO][4105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" HandleID="k8s-pod-network.4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.474 [INFO][4105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.474 [INFO][4105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.487 [WARNING][4105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" HandleID="k8s-pod-network.4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.487 [INFO][4105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" HandleID="k8s-pod-network.4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.489 [INFO][4105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:08.492908 env[1335]: 2025-08-13 00:58:08.491 [INFO][4097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:08.499168 systemd[1]: run-netns-cni\x2d4858c5cf\x2db779\x2d4824\x2d9e59\x2d1af42e7827fe.mount: Deactivated successfully. 
Aug 13 00:58:08.502327 env[1335]: time="2025-08-13T00:58:08.502203728Z" level=info msg="TearDown network for sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\" successfully" Aug 13 00:58:08.502510 env[1335]: time="2025-08-13T00:58:08.502333620Z" level=info msg="StopPodSandbox for \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\" returns successfully" Aug 13 00:58:08.503780 env[1335]: time="2025-08-13T00:58:08.503736056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f5mjv,Uid:42c2f2da-5a83-4b40-aec1-478c8ca60301,Namespace:kube-system,Attempt:1,}" Aug 13 00:58:08.779496 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:58:08.779648 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califf9fdfde503: link becomes ready Aug 13 00:58:08.763689 systemd-networkd[1087]: califf9fdfde503: Link UP Aug 13 00:58:08.782185 systemd-networkd[1087]: califf9fdfde503: Gained carrier Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.620 [INFO][4111] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0 coredns-7c65d6cfc9- kube-system 42c2f2da-5a83-4b40-aec1-478c8ca60301 926 0 2025-08-13 00:57:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal coredns-7c65d6cfc9-f5mjv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf9fdfde503 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f5mjv" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.620 [INFO][4111] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f5mjv" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.673 [INFO][4125] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" HandleID="k8s-pod-network.aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.673 [INFO][4125] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" HandleID="k8s-pod-network.aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b74a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", "pod":"coredns-7c65d6cfc9-f5mjv", "timestamp":"2025-08-13 00:58:08.672977678 +0000 UTC"}, 
Hostname:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.673 [INFO][4125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.673 [INFO][4125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.673 [INFO][4125] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal' Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.684 [INFO][4125] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.691 [INFO][4125] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.697 [INFO][4125] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.700 [INFO][4125] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.704 [INFO][4125] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.704 [INFO][4125] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.706 [INFO][4125] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831 Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.727 [INFO][4125] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.744 [INFO][4125] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.5/26] block=192.168.75.0/26 handle="k8s-pod-network.aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.744 [INFO][4125] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.5/26] handle="k8s-pod-network.aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.744 [INFO][4125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:58:08.816580 env[1335]: 2025-08-13 00:58:08.744 [INFO][4125] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.5/26] IPv6=[] ContainerID="aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" HandleID="k8s-pod-network.aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:08.817843 env[1335]: 2025-08-13 00:58:08.748 [INFO][4111] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f5mjv" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"42c2f2da-5a83-4b40-aec1-478c8ca60301", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7c65d6cfc9-f5mjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf9fdfde503", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:08.817843 env[1335]: 2025-08-13 00:58:08.748 [INFO][4111] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.5/32] ContainerID="aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f5mjv" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:08.817843 env[1335]: 2025-08-13 00:58:08.748 [INFO][4111] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf9fdfde503 ContainerID="aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f5mjv" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:08.817843 env[1335]: 2025-08-13 00:58:08.790 [INFO][4111] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f5mjv" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:08.817843 env[1335]: 2025-08-13 00:58:08.791 [INFO][4111] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f5mjv" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"42c2f2da-5a83-4b40-aec1-478c8ca60301", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831", Pod:"coredns-7c65d6cfc9-f5mjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf9fdfde503", MAC:"a6:3e:09:a0:8a:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:08.817843 env[1335]: 2025-08-13 00:58:08.809 [INFO][4111] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f5mjv" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:08.847000 audit[4141]: NETFILTER_CFG table=filter:108 family=2 entries=50 op=nft_register_chain pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:08.847000 audit[4141]: SYSCALL arch=c000003e syscall=46 success=yes exit=24912 a0=3 a1=7ffd5ee9e350 a2=0 a3=7ffd5ee9e33c items=0 ppid=3555 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:08.847000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:08.875387 env[1335]: time="2025-08-13T00:58:08.875099854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:58:08.875387 env[1335]: time="2025-08-13T00:58:08.875162599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:58:08.875387 env[1335]: time="2025-08-13T00:58:08.875184477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:58:08.875886 env[1335]: time="2025-08-13T00:58:08.875808509Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831 pid=4148 runtime=io.containerd.runc.v2 Aug 13 00:58:09.060418 env[1335]: time="2025-08-13T00:58:09.058582122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f5mjv,Uid:42c2f2da-5a83-4b40-aec1-478c8ca60301,Namespace:kube-system,Attempt:1,} returns sandbox id \"aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831\"" Aug 13 00:58:09.076047 env[1335]: time="2025-08-13T00:58:09.075991570Z" level=info msg="CreateContainer within sandbox \"aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:58:09.117468 env[1335]: time="2025-08-13T00:58:09.116752672Z" level=info msg="CreateContainer within sandbox \"aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a45cdafa2010727349ebeb6c1dbb6834b7c85685facdfa32544f8b49453712b8\"" Aug 13 00:58:09.123677 env[1335]: time="2025-08-13T00:58:09.118462922Z" level=info msg="StartContainer for \"a45cdafa2010727349ebeb6c1dbb6834b7c85685facdfa32544f8b49453712b8\"" Aug 13 00:58:09.141583 systemd-networkd[1087]: calie141a1ff581: Gained IPv6LL Aug 13 00:58:09.240301 env[1335]: time="2025-08-13T00:58:09.240232795Z" level=info msg="StartContainer for \"a45cdafa2010727349ebeb6c1dbb6834b7c85685facdfa32544f8b49453712b8\" returns successfully" Aug 13 00:58:09.605836 kubelet[2240]: I0813 00:58:09.605142 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f5mjv" podStartSLOduration=45.605111764 podStartE2EDuration="45.605111764s" podCreationTimestamp="2025-08-13 00:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:58:09.604324211 +0000 UTC m=+49.742830497" watchObservedRunningTime="2025-08-13 00:58:09.605111764 +0000 UTC m=+49.743618055" Aug 13 00:58:09.853000 audit[4222]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=4222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:09.853000 audit[4222]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe44db16f0 a2=0 a3=7ffe44db16dc items=0 ppid=2359 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:09.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:09.862000 audit[4222]: NETFILTER_CFG table=nat:110 family=2 entries=35 op=nft_register_chain pid=4222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:09.862000 audit[4222]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe44db16f0 a2=0 a3=7ffe44db16dc items=0 ppid=2359 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:09.862000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:09.900000 audit[4225]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=4225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:09.900000 audit[4225]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd78d50140 a2=0 a3=7ffd78d5012c items=0 ppid=2359 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:09.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:09.914000 audit[4225]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=4225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:09.914000 audit[4225]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd78d50140 a2=0 a3=7ffd78d5012c items=0 ppid=2359 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:09.914000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:10.126584 env[1335]: time="2025-08-13T00:58:10.125687048Z" level=info msg="StopPodSandbox for \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\"" Aug 13 00:58:10.138706 env[1335]: time="2025-08-13T00:58:10.138632849Z" level=info msg="StopPodSandbox for \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\"" Aug 13 00:58:10.208829 env[1335]: time="2025-08-13T00:58:10.208772076Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:10.214070 env[1335]: time="2025-08-13T00:58:10.214016094Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:10.220671 env[1335]: time="2025-08-13T00:58:10.220619206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:10.228690 env[1335]: time="2025-08-13T00:58:10.228634642Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:10.231164 env[1335]: time="2025-08-13T00:58:10.231101636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:58:10.248694 env[1335]: time="2025-08-13T00:58:10.246995665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:58:10.261433 env[1335]: time="2025-08-13T00:58:10.261359128Z" level=info msg="CreateContainer within sandbox \"f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:58:10.326615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410702514.mount: Deactivated successfully. Aug 13 00:58:10.358816 env[1335]: time="2025-08-13T00:58:10.358728758Z" level=info msg="CreateContainer within sandbox \"f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c476e4e2549217bdb1c67b94d1330b7acf0b0e872732e4a9cc1b9d322512046\"" Aug 13 00:58:10.362412 env[1335]: time="2025-08-13T00:58:10.362320827Z" level=info msg="StartContainer for \"9c476e4e2549217bdb1c67b94d1330b7acf0b0e872732e4a9cc1b9d322512046\"" Aug 13 00:58:10.486885 systemd-networkd[1087]: califf9fdfde503: Gained IPv6LL Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.318 [INFO][4244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.319 [INFO][4244] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" iface="eth0" netns="/var/run/netns/cni-32840c95-6cbc-e624-56bf-c83c62d3dc05" Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.319 [INFO][4244] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" iface="eth0" netns="/var/run/netns/cni-32840c95-6cbc-e624-56bf-c83c62d3dc05" Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.319 [INFO][4244] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" iface="eth0" netns="/var/run/netns/cni-32840c95-6cbc-e624-56bf-c83c62d3dc05" Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.319 [INFO][4244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.319 [INFO][4244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.545 [INFO][4267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" HandleID="k8s-pod-network.2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.547 [INFO][4267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.548 [INFO][4267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.561 [WARNING][4267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" HandleID="k8s-pod-network.2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.562 [INFO][4267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" HandleID="k8s-pod-network.2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.573 [INFO][4267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:10.579707 env[1335]: 2025-08-13 00:58:10.577 [INFO][4244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:10.586139 systemd[1]: run-netns-cni\x2d32840c95\x2d6cbc\x2de624\x2d56bf\x2dc83c62d3dc05.mount: Deactivated successfully. 
Aug 13 00:58:10.588575 env[1335]: time="2025-08-13T00:58:10.588518896Z" level=info msg="TearDown network for sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\" successfully" Aug 13 00:58:10.590471 env[1335]: time="2025-08-13T00:58:10.590412849Z" level=info msg="StopPodSandbox for \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\" returns successfully" Aug 13 00:58:10.596166 env[1335]: time="2025-08-13T00:58:10.596105991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c744b486-wkm2q,Uid:e00a54c4-ec09-455a-86b9-9b9e86402f95,Namespace:calico-system,Attempt:1,}" Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.274 [INFO][4249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.274 [INFO][4249] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" iface="eth0" netns="/var/run/netns/cni-3fb725fa-4de8-106e-9e38-5e4d0722ee2a" Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.274 [INFO][4249] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" iface="eth0" netns="/var/run/netns/cni-3fb725fa-4de8-106e-9e38-5e4d0722ee2a" Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.275 [INFO][4249] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" iface="eth0" netns="/var/run/netns/cni-3fb725fa-4de8-106e-9e38-5e4d0722ee2a" Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.275 [INFO][4249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.275 [INFO][4249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.568 [INFO][4263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" HandleID="k8s-pod-network.f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.569 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.575 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.602 [WARNING][4263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" HandleID="k8s-pod-network.f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.603 [INFO][4263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" HandleID="k8s-pod-network.f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.609 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:10.614278 env[1335]: 2025-08-13 00:58:10.611 [INFO][4249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:10.620903 env[1335]: time="2025-08-13T00:58:10.620834105Z" level=info msg="TearDown network for sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\" successfully" Aug 13 00:58:10.621144 env[1335]: time="2025-08-13T00:58:10.621108078Z" level=info msg="StopPodSandbox for \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\" returns successfully" Aug 13 00:58:10.621786 systemd[1]: run-netns-cni\x2d3fb725fa\x2d4de8\x2d106e\x2d9e38\x2d5e4d0722ee2a.mount: Deactivated successfully. Aug 13 00:58:10.627535 env[1335]: time="2025-08-13T00:58:10.627481151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cpcml,Uid:8da3e8d7-b831-4716-b890-6a89d4b7984d,Namespace:calico-system,Attempt:1,}" Aug 13 00:58:10.889660 env[1335]: time="2025-08-13T00:58:10.889564185Z" level=info msg="StartContainer for \"9c476e4e2549217bdb1c67b94d1330b7acf0b0e872732e4a9cc1b9d322512046\" returns successfully" Aug 13 00:58:11.091153 systemd-networkd[1087]: cali70f01647dbb: Link UP Aug 13 00:58:11.108409 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:58:11.108540 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali70f01647dbb: link becomes ready Aug 13 00:58:11.120427 systemd-networkd[1087]: cali70f01647dbb: Gained carrier Aug 13 00:58:11.121897 env[1335]: time="2025-08-13T00:58:11.121825624Z" level=info msg="StopPodSandbox for \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\"" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:10.916 [INFO][4305] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0 goldmane-58fd7646b9- calico-system 8da3e8d7-b831-4716-b890-6a89d4b7984d 947 0 2025-08-13 00:57:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal goldmane-58fd7646b9-cpcml eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali70f01647dbb [] [] }} ContainerID="77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpcml" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-" Aug 
13 00:58:11.152181 env[1335]: 2025-08-13 00:58:10.924 [INFO][4305] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpcml" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:10.995 [INFO][4338] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" HandleID="k8s-pod-network.77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:10.996 [INFO][4338] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" HandleID="k8s-pod-network.77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", "pod":"goldmane-58fd7646b9-cpcml", "timestamp":"2025-08-13 00:58:10.995566262 +0000 UTC"}, Hostname:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:10.996 [INFO][4338] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:10.996 [INFO][4338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:10.996 [INFO][4338] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal' Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.011 [INFO][4338] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.018 [INFO][4338] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.026 [INFO][4338] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.030 [INFO][4338] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.037 [INFO][4338] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.037 [INFO][4338] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.039 [INFO][4338] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3 Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.051 [INFO][4338] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.064 [INFO][4338] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.6/26] block=192.168.75.0/26 handle="k8s-pod-network.77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.064 [INFO][4338] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.6/26] handle="k8s-pod-network.77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.064 [INFO][4338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
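The ipam.go messages above trace Calico's block-affinity assignment for the goldmane pod: the plugin takes the host-wide IPAM lock, finds the block affine to this node (192.168.75.0/26), loads it, claims the next free address under a per-pod handle ("k8s-pod-network.<containerID>"), writes the block back, and releases the lock, ending up with 192.168.75.6/26. The sketch below mirrors that sequence with simplified local types; it is illustrative only and does not use the real libcalico-go API.

```go
package main

// Illustrative sketch of the block-affinity assignment sequence seen in the
// ipam.go log lines above. Types and function names are simplified stand-ins,
// not the real Calico IPAM API.

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models one affine IPAM block (e.g. 192.168.75.0/26) for a single host.
type block struct {
	cidr        netip.Prefix
	allocations map[netip.Addr]string // address -> handle that claimed it
}

var hostWideIPAMLock sync.Mutex // the "host-wide IPAM lock" in the log

// autoAssign claims the next free address in the host's affine block under
// the given handle (the log uses "k8s-pod-network.<containerID>").
func autoAssign(b *block, handle string) (netip.Addr, error) {
	hostWideIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostWideIPAMLock.Unlock() // "Released host-wide IPAM lock."

	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, used := b.allocations[a]; used {
			continue
		}
		b.allocations[a] = handle // "Writing block in order to claim IPs"
		return a, nil             // "Successfully claimed IPs: [...]"
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr:        netip.MustParsePrefix("192.168.75.0/26"),
		allocations: map[netip.Addr]string{},
	}
	// Pretend .0-.5 were claimed by earlier pods, as the log implies.
	for a, i := b.cidr.Addr(), 0; i < 6; a, i = a.Next(), i+1 {
		b.allocations[a] = "earlier-pod"
	}
	ip, _ := autoAssign(b, "k8s-pod-network.77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3")
	fmt.Println(ip) // 192.168.75.6, matching the goldmane pod above
}
```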
Aug 13 00:58:11.152181 env[1335]: 2025-08-13 00:58:11.064 [INFO][4338] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.6/26] IPv6=[] ContainerID="77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" HandleID="k8s-pod-network.77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:11.153773 env[1335]: 2025-08-13 00:58:11.071 [INFO][4305] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpcml" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8da3e8d7-b831-4716-b890-6a89d4b7984d", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-58fd7646b9-cpcml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70f01647dbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:11.153773 env[1335]: 2025-08-13 00:58:11.072 [INFO][4305] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.6/32] ContainerID="77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpcml" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:11.153773 env[1335]: 2025-08-13 00:58:11.072 [INFO][4305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70f01647dbb ContainerID="77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpcml" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:11.153773 env[1335]: 2025-08-13 00:58:11.122 [INFO][4305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpcml" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:11.153773 env[1335]: 2025-08-13 
00:58:11.122 [INFO][4305] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpcml" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8da3e8d7-b831-4716-b890-6a89d4b7984d", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3", Pod:"goldmane-58fd7646b9-cpcml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70f01647dbb", MAC:"0e:b5:12:cb:8a:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:11.153773 env[1335]: 2025-08-13 00:58:11.147 [INFO][4305] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpcml" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:11.214410 systemd-networkd[1087]: caliaa5a38ae2e3: Link UP Aug 13 00:58:11.224655 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaa5a38ae2e3: link becomes ready Aug 13 00:58:11.229567 systemd-networkd[1087]: caliaa5a38ae2e3: Gained carrier Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:10.911 [INFO][4300] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0 calico-kube-controllers-c744b486- calico-system e00a54c4-ec09-455a-86b9-9b9e86402f95 949 0 2025-08-13 00:57:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c744b486 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal calico-kube-controllers-c744b486-wkm2q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaa5a38ae2e3 [] [] }} 
ContainerID="e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" Namespace="calico-system" Pod="calico-kube-controllers-c744b486-wkm2q" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:10.911 [INFO][4300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" Namespace="calico-system" Pod="calico-kube-controllers-c744b486-wkm2q" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.034 [INFO][4336] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" HandleID="k8s-pod-network.e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.047 [INFO][4336] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" HandleID="k8s-pod-network.e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333310), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", "pod":"calico-kube-controllers-c744b486-wkm2q", "timestamp":"2025-08-13 00:58:11.03309243 +0000 UTC"}, Hostname:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.047 [INFO][4336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.064 [INFO][4336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.064 [INFO][4336] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal' Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.116 [INFO][4336] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.146 [INFO][4336] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.160 [INFO][4336] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.163 [INFO][4336] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.167 [INFO][4336] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.167 [INFO][4336] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.169 [INFO][4336] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667 Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.176 [INFO][4336] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.189 [INFO][4336] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.7/26] block=192.168.75.0/26 handle="k8s-pod-network.e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.189 [INFO][4336] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.7/26] handle="k8s-pod-network.e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.190 [INFO][4336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
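Both sandboxes land in the same /26 block that is affine to this node, so the addresses come out sequentially: .6 for goldmane above, .7 for calico-kube-controllers here, and .8 for coredns further below. A quick sanity check of what that block holds, as a small sketch:

```go
package main

// Sanity check of the affine block seen in the IPAM logs: a /26 holds
// 64 addresses, and the pod IPs claimed here (.6, .7, .8) all fall in it.

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.75.0/26")
	size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses
	fmt.Printf("block %s holds %d addresses\n", block, size)

	for _, s := range []string{"192.168.75.6", "192.168.75.7", "192.168.75.8"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
}
```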
Aug 13 00:58:11.264961 env[1335]: 2025-08-13 00:58:11.190 [INFO][4336] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.7/26] IPv6=[] ContainerID="e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" HandleID="k8s-pod-network.e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:11.266229 env[1335]: 2025-08-13 00:58:11.193 [INFO][4300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" Namespace="calico-system" Pod="calico-kube-controllers-c744b486-wkm2q" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0", GenerateName:"calico-kube-controllers-c744b486-", Namespace:"calico-system", SelfLink:"", UID:"e00a54c4-ec09-455a-86b9-9b9e86402f95", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c744b486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-c744b486-wkm2q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa5a38ae2e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:11.266229 env[1335]: 2025-08-13 00:58:11.194 [INFO][4300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.7/32] ContainerID="e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" Namespace="calico-system" Pod="calico-kube-controllers-c744b486-wkm2q" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:11.266229 env[1335]: 2025-08-13 00:58:11.194 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa5a38ae2e3 ContainerID="e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" Namespace="calico-system" Pod="calico-kube-controllers-c744b486-wkm2q" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:11.266229 env[1335]: 2025-08-13 00:58:11.237 [INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" 
Namespace="calico-system" Pod="calico-kube-controllers-c744b486-wkm2q" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:11.266229 env[1335]: 2025-08-13 00:58:11.238 [INFO][4300] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" Namespace="calico-system" Pod="calico-kube-controllers-c744b486-wkm2q" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0", GenerateName:"calico-kube-controllers-c744b486-", Namespace:"calico-system", SelfLink:"", UID:"e00a54c4-ec09-455a-86b9-9b9e86402f95", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c744b486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667", Pod:"calico-kube-controllers-c744b486-wkm2q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa5a38ae2e3", MAC:"62:e8:6e:98:85:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:11.266229 env[1335]: 2025-08-13 00:58:11.254 [INFO][4300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667" Namespace="calico-system" Pod="calico-kube-controllers-c744b486-wkm2q" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:11.313095 env[1335]: time="2025-08-13T00:58:11.312980194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:58:11.313380 env[1335]: time="2025-08-13T00:58:11.313341828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:58:11.313581 env[1335]: time="2025-08-13T00:58:11.313547114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:58:11.314022 env[1335]: time="2025-08-13T00:58:11.313978627Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3 pid=4394 runtime=io.containerd.runc.v2 Aug 13 00:58:11.359000 audit[4415]: NETFILTER_CFG table=filter:113 family=2 entries=84 op=nft_register_chain pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:11.366956 kernel: kauditd_printk_skb: 26 callbacks suppressed Aug 13 00:58:11.367173 kernel: audit: type=1325 audit(1755046691.359:403): table=filter:113 family=2 entries=84 op=nft_register_chain pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:11.414160 env[1335]: time="2025-08-13T00:58:11.411770277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:58:11.414160 env[1335]: time="2025-08-13T00:58:11.411835243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:58:11.414160 env[1335]: time="2025-08-13T00:58:11.411860810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:58:11.414668 env[1335]: time="2025-08-13T00:58:11.414579353Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667 pid=4416 runtime=io.containerd.runc.v2 Aug 13 00:58:11.469897 kernel: audit: type=1300 audit(1755046691.359:403): arch=c000003e syscall=46 success=yes exit=45404 a0=3 a1=7ffc23ba26c0 a2=0 a3=7ffc23ba26ac items=0 ppid=3555 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:11.359000 audit[4415]: SYSCALL arch=c000003e syscall=46 success=yes exit=45404 a0=3 a1=7ffc23ba26c0 a2=0 a3=7ffc23ba26ac items=0 ppid=3555 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:11.359000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:11.514665 kernel: audit: type=1327 audit(1755046691.359:403): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:11.593445 systemd[1]: run-containerd-runc-k8s.io-77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3-runc.qP9LJ1.mount: Deactivated successfully. 
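The audit PROCTITLE records print the triggering command line hex-encoded, with NUL bytes separating the argv elements. Decoding the value logged above yields "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000"; the later records decode the same way to "iptables-restore -w 5 -W 100000 --noflush --counters". A minimal decoder sketch (not tied to any audit library):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns the hex-encoded proctitle field of an audit
// PROCTITLE record back into the command line; argv elements are
// separated by NUL bytes in the raw value.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// proctitle value copied from the audit record above
	const p = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
	cmd, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```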
Aug 13 00:58:11.677217 kubelet[2240]: I0813 00:58:11.676623 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d7cc8c448-5tbcd" podStartSLOduration=30.581128474 podStartE2EDuration="34.676577667s" podCreationTimestamp="2025-08-13 00:57:37 +0000 UTC" firstStartedPulling="2025-08-13 00:58:06.141372855 +0000 UTC m=+46.279879120" lastFinishedPulling="2025-08-13 00:58:10.236822041 +0000 UTC m=+50.375328313" observedRunningTime="2025-08-13 00:58:11.665339452 +0000 UTC m=+51.803845741" watchObservedRunningTime="2025-08-13 00:58:11.676577667 +0000 UTC m=+51.815083962" Aug 13 00:58:11.774000 audit[4458]: NETFILTER_CFG table=filter:114 family=2 entries=14 op=nft_register_rule pid=4458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:11.793715 kernel: audit: type=1325 audit(1755046691.774:404): table=filter:114 family=2 entries=14 op=nft_register_rule pid=4458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:11.774000 audit[4458]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffbddfd530 a2=0 a3=7fffbddfd51c items=0 ppid=2359 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:11.835440 kernel: audit: type=1300 audit(1755046691.774:404): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffbddfd530 a2=0 a3=7fffbddfd51c items=0 ppid=2359 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:11.774000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:11.796000 audit[4458]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=4458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:11.867521 kernel: audit: type=1327 audit(1755046691.774:404): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:11.868773 kernel: audit: type=1325 audit(1755046691.796:405): table=nat:115 family=2 entries=20 op=nft_register_rule pid=4458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:11.868848 kernel: audit: type=1300 audit(1755046691.796:405): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffbddfd530 a2=0 a3=7fffbddfd51c items=0 ppid=2359 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:11.796000 audit[4458]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffbddfd530 a2=0 a3=7fffbddfd51c items=0 ppid=2359 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:11.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:11.917705 kernel: audit: type=1327 audit(1755046691.796:405): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.680 [INFO][4380] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.680 [INFO][4380] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" iface="eth0" netns="/var/run/netns/cni-c1db6a9a-cdb3-b00c-ede3-43962d2dc4cd" Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.681 [INFO][4380] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" iface="eth0" netns="/var/run/netns/cni-c1db6a9a-cdb3-b00c-ede3-43962d2dc4cd" Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.682 [INFO][4380] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" iface="eth0" netns="/var/run/netns/cni-c1db6a9a-cdb3-b00c-ede3-43962d2dc4cd" Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.684 [INFO][4380] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.684 [INFO][4380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.904 [INFO][4454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" HandleID="k8s-pod-network.4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.911 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.911 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.936 [WARNING][4454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" HandleID="k8s-pod-network.4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.936 [INFO][4454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" HandleID="k8s-pod-network.4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.941 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:11.953552 env[1335]: 2025-08-13 00:58:11.947 [INFO][4380] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:11.953552 env[1335]: time="2025-08-13T00:58:11.952606327Z" level=info msg="TearDown network for sandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\" successfully" Aug 13 00:58:11.953552 env[1335]: time="2025-08-13T00:58:11.952660969Z" level=info msg="StopPodSandbox for \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\" returns successfully" Aug 13 00:58:11.954528 env[1335]: time="2025-08-13T00:58:11.953780103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mbnvb,Uid:2b8de9e1-b570-4daf-8356-82b085f5759f,Namespace:kube-system,Attempt:1,}" Aug 13 00:58:12.096411 env[1335]: time="2025-08-13T00:58:12.096344721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c744b486-wkm2q,Uid:e00a54c4-ec09-455a-86b9-9b9e86402f95,Namespace:calico-system,Attempt:1,} returns sandbox id \"e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667\"" Aug 13 00:58:12.101711 env[1335]: time="2025-08-13T00:58:12.101655252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cpcml,Uid:8da3e8d7-b831-4716-b890-6a89d4b7984d,Namespace:calico-system,Attempt:1,} returns sandbox id \"77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3\"" Aug 13 00:58:12.315333 systemd-networkd[1087]: cali31a95140d75: Link UP Aug 13 00:58:12.328616 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:58:12.328783 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali31a95140d75: link becomes ready Aug 13 00:58:12.339407 systemd-networkd[1087]: cali31a95140d75: Gained carrier Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.132 [INFO][4470] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0 coredns-7c65d6cfc9- kube-system 2b8de9e1-b570-4daf-8356-82b085f5759f 966 0 2025-08-13 00:57:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal coredns-7c65d6cfc9-mbnvb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali31a95140d75 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mbnvb" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-" Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.132 [INFO][4470] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mbnvb" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.209 [INFO][4494] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" HandleID="k8s-pod-network.82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" 
Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.209 [INFO][4494] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" HandleID="k8s-pod-network.82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003299b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", "pod":"coredns-7c65d6cfc9-mbnvb", "timestamp":"2025-08-13 00:58:12.209199742 +0000 UTC"}, Hostname:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.209 [INFO][4494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.210 [INFO][4494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.210 [INFO][4494] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal' Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.240 [INFO][4494] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.248 [INFO][4494] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.262 [INFO][4494] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.265 [INFO][4494] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.269 [INFO][4494] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.269 [INFO][4494] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.271 [INFO][4494] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90 Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.278 [INFO][4494] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.290 [INFO][4494] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.8/26] block=192.168.75.0/26 
handle="k8s-pod-network.82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.290 [INFO][4494] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.8/26] handle="k8s-pod-network.82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" host="ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal" Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.290 [INFO][4494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:12.389950 env[1335]: 2025-08-13 00:58:12.290 [INFO][4494] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.8/26] IPv6=[] ContainerID="82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" HandleID="k8s-pod-network.82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:12.427119 env[1335]: 2025-08-13 00:58:12.292 [INFO][4470] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mbnvb" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b8de9e1-b570-4daf-8356-82b085f5759f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7c65d6cfc9-mbnvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31a95140d75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:12.427119 env[1335]: 2025-08-13 00:58:12.293 [INFO][4470] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.8/32] ContainerID="82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mbnvb" 
WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:12.427119 env[1335]: 2025-08-13 00:58:12.293 [INFO][4470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31a95140d75 ContainerID="82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mbnvb" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:12.427119 env[1335]: 2025-08-13 00:58:12.356 [INFO][4470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mbnvb" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:12.427119 env[1335]: 2025-08-13 00:58:12.357 [INFO][4470] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mbnvb" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b8de9e1-b570-4daf-8356-82b085f5759f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90", Pod:"coredns-7c65d6cfc9-mbnvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31a95140d75", MAC:"a6:d6:b1:1e:32:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:12.427119 env[1335]: 2025-08-13 00:58:12.374 [INFO][4470] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mbnvb" 
WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:12.415206 systemd[1]: run-containerd-runc-k8s.io-e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667-runc.anLIOl.mount: Deactivated successfully. Aug 13 00:58:12.415447 systemd[1]: run-netns-cni\x2dc1db6a9a\x2dcdb3\x2db00c\x2dede3\x2d43962d2dc4cd.mount: Deactivated successfully. Aug 13 00:58:12.496669 kernel: audit: type=1325 audit(1755046692.475:406): table=filter:116 family=2 entries=58 op=nft_register_chain pid=4509 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:12.475000 audit[4509]: NETFILTER_CFG table=filter:116 family=2 entries=58 op=nft_register_chain pid=4509 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:58:12.475000 audit[4509]: SYSCALL arch=c000003e syscall=46 success=yes exit=26744 a0=3 a1=7ffe126a7120 a2=0 a3=7ffe126a710c items=0 ppid=3555 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:12.475000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:58:12.527771 env[1335]: time="2025-08-13T00:58:12.520229641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:58:12.527771 env[1335]: time="2025-08-13T00:58:12.520286114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:58:12.527771 env[1335]: time="2025-08-13T00:58:12.520306212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:58:12.527771 env[1335]: time="2025-08-13T00:58:12.520510164Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90 pid=4518 runtime=io.containerd.runc.v2 Aug 13 00:58:12.658649 systemd[1]: run-containerd-runc-k8s.io-82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90-runc.Zf5gKp.mount: Deactivated successfully. 
Aug 13 00:58:12.682101 kubelet[2240]: I0813 00:58:12.682020 2240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:58:12.749100 env[1335]: time="2025-08-13T00:58:12.749020442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mbnvb,Uid:2b8de9e1-b570-4daf-8356-82b085f5759f,Namespace:kube-system,Attempt:1,} returns sandbox id \"82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90\"" Aug 13 00:58:12.754643 env[1335]: time="2025-08-13T00:58:12.754186489Z" level=info msg="CreateContainer within sandbox \"82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:58:12.800048 env[1335]: time="2025-08-13T00:58:12.793741874Z" level=info msg="CreateContainer within sandbox \"82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa50f8959e8e650e59df482b9bfc8b4587adca7fee18af8b0dc58747b367846d\"" Aug 13 00:58:12.800048 env[1335]: time="2025-08-13T00:58:12.795058823Z" level=info msg="StartContainer for \"aa50f8959e8e650e59df482b9bfc8b4587adca7fee18af8b0dc58747b367846d\"" Aug 13 00:58:12.959626 env[1335]: time="2025-08-13T00:58:12.958953757Z" level=info msg="StartContainer for \"aa50f8959e8e650e59df482b9bfc8b4587adca7fee18af8b0dc58747b367846d\" returns successfully" Aug 13 00:58:12.981635 systemd-networkd[1087]: cali70f01647dbb: Gained IPv6LL Aug 13 00:58:13.237557 systemd-networkd[1087]: caliaa5a38ae2e3: Gained IPv6LL Aug 13 00:58:13.625362 systemd-networkd[1087]: cali31a95140d75: Gained IPv6LL Aug 13 00:58:13.744216 kubelet[2240]: I0813 00:58:13.743231 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mbnvb" podStartSLOduration=49.74320351 podStartE2EDuration="49.74320351s" podCreationTimestamp="2025-08-13 00:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:58:13.705165225 +0000 UTC m=+53.843671515" watchObservedRunningTime="2025-08-13 00:58:13.74320351 +0000 UTC m=+53.881709801" Aug 13 00:58:13.778000 audit[4589]: NETFILTER_CFG table=filter:117 family=2 entries=14 op=nft_register_rule pid=4589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:13.778000 audit[4589]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1f5a7c50 a2=0 a3=7ffc1f5a7c3c items=0 ppid=2359 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:13.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:13.783000 audit[4589]: NETFILTER_CFG table=nat:118 family=2 entries=44 op=nft_register_rule pid=4589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:13.783000 audit[4589]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc1f5a7c50 a2=0 a3=7ffc1f5a7c3c items=0 ppid=2359 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:13.783000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:13.803000 audit[4591]: NETFILTER_CFG table=filter:119 family=2 entries=14 op=nft_register_rule pid=4591 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:13.803000 audit[4591]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff3f4d56b0 a2=0 a3=7fff3f4d569c items=0 ppid=2359 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:13.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:13.826000 audit[4591]: NETFILTER_CFG table=nat:120 family=2 entries=56 op=nft_register_chain pid=4591 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:13.826000 audit[4591]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff3f4d56b0 a2=0 a3=7fff3f4d569c items=0 ppid=2359 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:13.826000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:14.186000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47543768.mount: Deactivated successfully. Aug 13 00:58:14.214239 env[1335]: time="2025-08-13T00:58:14.214139146Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:14.217512 env[1335]: time="2025-08-13T00:58:14.217434204Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:14.220197 env[1335]: time="2025-08-13T00:58:14.220127283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:14.222782 env[1335]: time="2025-08-13T00:58:14.222705490Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:14.223711 env[1335]: time="2025-08-13T00:58:14.223642854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 00:58:14.227869 env[1335]: time="2025-08-13T00:58:14.226122680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:58:14.229392 env[1335]: time="2025-08-13T00:58:14.229285746Z" level=info msg="CreateContainer within sandbox \"5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:58:14.256559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646955955.mount: Deactivated successfully. 
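The kubelet pod_startup_latency_tracker entries above report durations that can be reproduced from the timestamps they print. For coredns-7c65d6cfc9-mbnvb, podStartE2EDuration is simply the observed running time minus the pod creation time; its pull timestamps are the zero time, which suggests the image was already on the node, and the SLO duration equals the E2E duration in that case. A small sketch of the arithmetic:

```go
package main

// Recomputes the coredns podStartE2EDuration from the timestamps printed by
// the kubelet's pod_startup_latency_tracker record above.

import (
	"fmt"
	"time"
)

func main() {
	// time.Parse accepts fractional seconds even when the layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, _ := time.Parse(layout, "2025-08-13 00:57:24 +0000 UTC")
	running, _ := time.Parse(layout, "2025-08-13 00:58:13.74320351 +0000 UTC")

	fmt.Println(running.Sub(created)) // 49.74320351s, matching podStartE2EDuration
}
```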
Aug 13 00:58:14.258531 env[1335]: time="2025-08-13T00:58:14.258456530Z" level=info msg="CreateContainer within sandbox \"5933ad8d63f1c1abd09a97bf90a1e7e53ec50f1869d04382b1022a6049fe1535\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"52ba59d33136b88de7c6b7df4e62f501bff40aed4439bd2ec6037d303f12899a\"" Aug 13 00:58:14.262426 env[1335]: time="2025-08-13T00:58:14.262372234Z" level=info msg="StartContainer for \"52ba59d33136b88de7c6b7df4e62f501bff40aed4439bd2ec6037d303f12899a\"" Aug 13 00:58:14.380064 env[1335]: time="2025-08-13T00:58:14.380001995Z" level=info msg="StartContainer for \"52ba59d33136b88de7c6b7df4e62f501bff40aed4439bd2ec6037d303f12899a\" returns successfully" Aug 13 00:58:14.508943 env[1335]: time="2025-08-13T00:58:14.507904361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:14.512512 env[1335]: time="2025-08-13T00:58:14.512437372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:14.515624 env[1335]: time="2025-08-13T00:58:14.515490676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:14.517976 env[1335]: time="2025-08-13T00:58:14.517917883Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:14.518834 env[1335]: time="2025-08-13T00:58:14.518768167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:58:14.523819 env[1335]: time="2025-08-13T00:58:14.522204666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:58:14.526482 env[1335]: time="2025-08-13T00:58:14.525863073Z" level=info msg="CreateContainer within sandbox \"9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:58:14.556670 env[1335]: time="2025-08-13T00:58:14.556561434Z" level=info msg="CreateContainer within sandbox \"9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c18b4b9818183d4c1962b80cd24bfef1e46d22341e5ea2dfb034d32d293f8ef\"" Aug 13 00:58:14.561243 env[1335]: time="2025-08-13T00:58:14.560646709Z" level=info msg="StartContainer for \"9c18b4b9818183d4c1962b80cd24bfef1e46d22341e5ea2dfb034d32d293f8ef\"" Aug 13 00:58:14.746951 env[1335]: time="2025-08-13T00:58:14.746869820Z" level=info msg="StartContainer for \"9c18b4b9818183d4c1962b80cd24bfef1e46d22341e5ea2dfb034d32d293f8ef\" returns successfully" Aug 13 00:58:14.771000 audit[4659]: NETFILTER_CFG table=filter:121 family=2 entries=13 op=nft_register_rule pid=4659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:14.771000 audit[4659]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc6c691060 a2=0 a3=7ffc6c69104c items=0 ppid=2359 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:14.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:14.777000 audit[4659]: NETFILTER_CFG table=nat:122 family=2 entries=27 op=nft_register_chain pid=4659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:14.777000 audit[4659]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc6c691060 a2=0 a3=7ffc6c69104c items=0 ppid=2359 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:14.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:15.401747 systemd[1]: run-containerd-runc-k8s.io-9c18b4b9818183d4c1962b80cd24bfef1e46d22341e5ea2dfb034d32d293f8ef-runc.rXfDbO.mount: Deactivated successfully. Aug 13 00:58:15.753625 kubelet[2240]: I0813 00:58:15.746639 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d7cc8c448-zvxvr" podStartSLOduration=31.221962332 podStartE2EDuration="38.746577435s" podCreationTimestamp="2025-08-13 00:57:37 +0000 UTC" firstStartedPulling="2025-08-13 00:58:06.996231469 +0000 UTC m=+47.134737749" lastFinishedPulling="2025-08-13 00:58:14.520846563 +0000 UTC m=+54.659352852" observedRunningTime="2025-08-13 00:58:15.744230737 +0000 UTC m=+55.882737027" watchObservedRunningTime="2025-08-13 00:58:15.746577435 +0000 UTC m=+55.885083725" Aug 13 00:58:15.753625 kubelet[2240]: I0813 00:58:15.747081 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7bd8966bb5-lxg8k" podStartSLOduration=3.01414308 podStartE2EDuration="12.747064219s" podCreationTimestamp="2025-08-13 00:58:03 +0000 UTC" firstStartedPulling="2025-08-13 00:58:04.492521331 +0000 UTC m=+44.631027598" lastFinishedPulling="2025-08-13 00:58:14.225442456 +0000 UTC m=+54.363948737" observedRunningTime="2025-08-13 00:58:14.730613884 +0000 UTC m=+54.869120178" watchObservedRunningTime="2025-08-13 00:58:15.747064219 +0000 UTC m=+55.885570510" Aug 13 00:58:15.823000 audit[4670]: NETFILTER_CFG table=filter:123 family=2 entries=12 op=nft_register_rule pid=4670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:15.823000 audit[4670]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe99b0e3d0 a2=0 a3=7ffe99b0e3bc items=0 ppid=2359 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:15.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:15.829000 audit[4670]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=4670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:15.829000 audit[4670]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe99b0e3d0 a2=0 a3=7ffe99b0e3bc items=0 ppid=2359 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:15.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:16.302278 env[1335]: time="2025-08-13T00:58:16.302151304Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:16.331086 env[1335]: time="2025-08-13T00:58:16.331029882Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:16.333981 env[1335]: time="2025-08-13T00:58:16.333819341Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:16.336756 env[1335]: time="2025-08-13T00:58:16.336705448Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:16.337471 env[1335]: time="2025-08-13T00:58:16.337424541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 00:58:16.341800 env[1335]: time="2025-08-13T00:58:16.341745870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:58:16.347400 env[1335]: time="2025-08-13T00:58:16.347322432Z" level=info msg="CreateContainer within sandbox \"7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:58:16.991000 audit[4679]: NETFILTER_CFG table=filter:125 family=2 entries=11 op=nft_register_rule pid=4679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:16.999065 kernel: kauditd_printk_skb: 26 callbacks suppressed Aug 13 00:58:16.999286 kernel: audit: type=1325 audit(1755046696.991:415): table=filter:125 family=2 entries=11 op=nft_register_rule pid=4679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:16.991000 audit[4679]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff06a9dd70 a2=0 a3=7fff06a9dd5c items=0 ppid=2359 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:17.053676 kernel: audit: type=1300 audit(1755046696.991:415): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff06a9dd70 a2=0 a3=7fff06a9dd5c items=0 ppid=2359 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:16.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:17.070653 kernel: audit: type=1327 audit(1755046696.991:415): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:17.122799 kernel: audit: type=1325 
audit(1755046697.019:416): table=nat:126 family=2 entries=29 op=nft_register_chain pid=4679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:17.122923 kernel: audit: type=1300 audit(1755046697.019:416): arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fff06a9dd70 a2=0 a3=7fff06a9dd5c items=0 ppid=2359 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:17.019000 audit[4679]: NETFILTER_CFG table=nat:126 family=2 entries=29 op=nft_register_chain pid=4679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:17.019000 audit[4679]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fff06a9dd70 a2=0 a3=7fff06a9dd5c items=0 ppid=2359 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:17.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:17.140148 kernel: audit: type=1327 audit(1755046697.019:416): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:17.895325 env[1335]: time="2025-08-13T00:58:17.895224716Z" level=info msg="CreateContainer within sandbox \"7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a039bcdb92b7926ba77423de8f1439e66e4d9fdf562badf7a05cbaaf9fa7591\"" Aug 13 00:58:17.896937 env[1335]: time="2025-08-13T00:58:17.896888051Z" level=info msg="StartContainer for \"8a039bcdb92b7926ba77423de8f1439e66e4d9fdf562badf7a05cbaaf9fa7591\"" Aug 13 00:58:17.962639 systemd[1]: run-containerd-runc-k8s.io-8a039bcdb92b7926ba77423de8f1439e66e4d9fdf562badf7a05cbaaf9fa7591-runc.gvZc4x.mount: Deactivated successfully. Aug 13 00:58:18.044002 env[1335]: time="2025-08-13T00:58:18.043832308Z" level=info msg="StartContainer for \"8a039bcdb92b7926ba77423de8f1439e66e4d9fdf562badf7a05cbaaf9fa7591\" returns successfully" Aug 13 00:58:20.096950 env[1335]: time="2025-08-13T00:58:20.096860067Z" level=info msg="StopPodSandbox for \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\"" Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.197 [WARNING][4725] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"281890f3-f0b5-4757-b7db-b03ab8faf735", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932", Pod:"csi-node-driver-hv7wr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie141a1ff581", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.197 [INFO][4725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.197 [INFO][4725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" iface="eth0" netns="" Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.198 [INFO][4725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.198 [INFO][4725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.260 [INFO][4734] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" HandleID="k8s-pod-network.543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.260 [INFO][4734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.260 [INFO][4734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.270 [WARNING][4734] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" HandleID="k8s-pod-network.543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.271 [INFO][4734] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" HandleID="k8s-pod-network.543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.273 [INFO][4734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:20.277567 env[1335]: 2025-08-13 00:58:20.275 [INFO][4725] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:20.278522 env[1335]: time="2025-08-13T00:58:20.277569573Z" level=info msg="TearDown network for sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\" successfully" Aug 13 00:58:20.278522 env[1335]: time="2025-08-13T00:58:20.277634965Z" level=info msg="StopPodSandbox for \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\" returns successfully" Aug 13 00:58:20.286303 env[1335]: time="2025-08-13T00:58:20.286251976Z" level=info msg="RemovePodSandbox for \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\"" Aug 13 00:58:20.286737 env[1335]: time="2025-08-13T00:58:20.286563254Z" level=info msg="Forcibly stopping sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\"" Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.398 [WARNING][4749] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"281890f3-f0b5-4757-b7db-b03ab8faf735", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932", Pod:"csi-node-driver-hv7wr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie141a1ff581", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.399 [INFO][4749] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.399 [INFO][4749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" iface="eth0" netns="" Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.399 [INFO][4749] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.399 [INFO][4749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.452 [INFO][4756] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" HandleID="k8s-pod-network.543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.452 [INFO][4756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.453 [INFO][4756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.463 [WARNING][4756] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" HandleID="k8s-pod-network.543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.463 [INFO][4756] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" HandleID="k8s-pod-network.543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-csi--node--driver--hv7wr-eth0" Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.466 [INFO][4756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:20.470633 env[1335]: 2025-08-13 00:58:20.468 [INFO][4749] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78" Aug 13 00:58:20.471550 env[1335]: time="2025-08-13T00:58:20.470680524Z" level=info msg="TearDown network for sandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\" successfully" Aug 13 00:58:20.476991 env[1335]: time="2025-08-13T00:58:20.476927394Z" level=info msg="RemovePodSandbox \"543a87deaa020693315c3161a9f81d0296ea8378713791ec6b3107cc8ed65f78\" returns successfully" Aug 13 00:58:20.477670 env[1335]: time="2025-08-13T00:58:20.477628652Z" level=info msg="StopPodSandbox for \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\"" Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.563 [WARNING][4771] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8da3e8d7-b831-4716-b890-6a89d4b7984d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3", Pod:"goldmane-58fd7646b9-cpcml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70f01647dbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.563 [INFO][4771] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.563 [INFO][4771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" iface="eth0" netns="" Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.563 [INFO][4771] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.563 [INFO][4771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.627 [INFO][4779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" HandleID="k8s-pod-network.f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.634 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.634 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.649 [WARNING][4779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" HandleID="k8s-pod-network.f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.649 [INFO][4779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" HandleID="k8s-pod-network.f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.653 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:20.659151 env[1335]: 2025-08-13 00:58:20.656 [INFO][4771] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:20.660299 env[1335]: time="2025-08-13T00:58:20.660248889Z" level=info msg="TearDown network for sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\" successfully" Aug 13 00:58:20.660441 env[1335]: time="2025-08-13T00:58:20.660411352Z" level=info msg="StopPodSandbox for \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\" returns successfully" Aug 13 00:58:20.661253 env[1335]: time="2025-08-13T00:58:20.661217510Z" level=info msg="RemovePodSandbox for \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\"" Aug 13 00:58:20.661472 env[1335]: time="2025-08-13T00:58:20.661416758Z" level=info msg="Forcibly stopping sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\"" Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.749 [WARNING][4794] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8da3e8d7-b831-4716-b890-6a89d4b7984d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3", Pod:"goldmane-58fd7646b9-cpcml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70f01647dbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.749 [INFO][4794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.749 [INFO][4794] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" iface="eth0" netns="" Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.749 [INFO][4794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.749 [INFO][4794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.811 [INFO][4801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" HandleID="k8s-pod-network.f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.812 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.812 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.822 [WARNING][4801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" HandleID="k8s-pod-network.f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.822 [INFO][4801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" HandleID="k8s-pod-network.f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--cpcml-eth0" Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.824 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:20.829214 env[1335]: 2025-08-13 00:58:20.826 [INFO][4794] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf" Aug 13 00:58:20.829214 env[1335]: time="2025-08-13T00:58:20.828000642Z" level=info msg="TearDown network for sandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\" successfully" Aug 13 00:58:20.835822 env[1335]: time="2025-08-13T00:58:20.835753814Z" level=info msg="RemovePodSandbox \"f1b6ae3799bf073fa7c6be391079ce39c7d991efa14e3155457dae53203c7fcf\" returns successfully" Aug 13 00:58:20.836469 env[1335]: time="2025-08-13T00:58:20.836430567Z" level=info msg="StopPodSandbox for \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\"" Aug 13 00:58:20.860929 env[1335]: time="2025-08-13T00:58:20.860864029Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:20.864728 env[1335]: time="2025-08-13T00:58:20.864662665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:20.868617 env[1335]: time="2025-08-13T00:58:20.868525449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:20.870038 env[1335]: time="2025-08-13T00:58:20.869995636Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:20.871194 env[1335]: time="2025-08-13T00:58:20.871041824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 00:58:20.873489 env[1335]: time="2025-08-13T00:58:20.873450290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:58:20.909712 env[1335]: time="2025-08-13T00:58:20.909658227Z" level=info msg="CreateContainer within sandbox \"e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:58:20.938046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211640592.mount: Deactivated successfully. 
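The systemd units named like var-lib-containerd-tmpmounts-containerd\x2dmount1211640592.mount in the entries above are just escaped mount paths: systemd replaces "/" with "-" in unit names and escapes a literal "-" as "\x2d". A small sketch (illustration only, not part of the log) that reverses the escaping to recover the temporary mount path containerd used under /var/lib/containerd/tmpmounts:

```python
import re

def systemd_unescape(unit: str) -> str:
    """Turn a systemd .mount unit name back into the path it represents."""
    name = unit.removesuffix(".mount")
    name = name.replace("-", "/")                 # '-' stands in for the path separator
    name = re.sub(r"\\x([0-9a-fA-F]{2})",          # '\xNN' escapes a literal byte
                  lambda m: chr(int(m.group(1), 16)), name)
    return "/" + name

print(systemd_unescape(r"var-lib-containerd-tmpmounts-containerd\x2dmount1211640592.mount"))
# /var/lib/containerd/tmpmounts/containerd-mount1211640592
```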
Aug 13 00:58:20.950720 env[1335]: time="2025-08-13T00:58:20.950647166Z" level=info msg="CreateContainer within sandbox \"e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"49035974361a83d3e5dd9c1de5c2f175d7975c7a15167e648a8b638ea0c7a871\"" Aug 13 00:58:20.955799 env[1335]: time="2025-08-13T00:58:20.955177273Z" level=info msg="StartContainer for \"49035974361a83d3e5dd9c1de5c2f175d7975c7a15167e648a8b638ea0c7a871\"" Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.022 [WARNING][4815] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--5654d7f6ff--28dlk-eth0" Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.022 [INFO][4815] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.022 [INFO][4815] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" iface="eth0" netns="" Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.022 [INFO][4815] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.022 [INFO][4815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.094 [INFO][4833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" HandleID="k8s-pod-network.d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--5654d7f6ff--28dlk-eth0" Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.094 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.094 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.104 [WARNING][4833] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" HandleID="k8s-pod-network.d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--5654d7f6ff--28dlk-eth0" Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.105 [INFO][4833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" HandleID="k8s-pod-network.d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--5654d7f6ff--28dlk-eth0" Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.107 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:21.111487 env[1335]: 2025-08-13 00:58:21.109 [INFO][4815] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:21.113116 env[1335]: time="2025-08-13T00:58:21.113065536Z" level=info msg="TearDown network for sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\" successfully" Aug 13 00:58:21.113279 env[1335]: time="2025-08-13T00:58:21.113237689Z" level=info msg="StopPodSandbox for \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\" returns successfully" Aug 13 00:58:21.121666 env[1335]: time="2025-08-13T00:58:21.114044126Z" level=info msg="RemovePodSandbox for \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\"" Aug 13 00:58:21.121666 env[1335]: time="2025-08-13T00:58:21.114092502Z" level=info msg="Forcibly stopping sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\"" Aug 13 00:58:21.177243 env[1335]: time="2025-08-13T00:58:21.173910134Z" level=info msg="StartContainer for \"49035974361a83d3e5dd9c1de5c2f175d7975c7a15167e648a8b638ea0c7a871\" returns successfully" Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.183 [WARNING][4865] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" WorkloadEndpoint="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--5654d7f6ff--28dlk-eth0" Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.183 [INFO][4865] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.183 [INFO][4865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" iface="eth0" netns="" Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.183 [INFO][4865] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.183 [INFO][4865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.235 [INFO][4887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" HandleID="k8s-pod-network.d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--5654d7f6ff--28dlk-eth0" Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.235 [INFO][4887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.235 [INFO][4887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.251 [WARNING][4887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" HandleID="k8s-pod-network.d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--5654d7f6ff--28dlk-eth0" Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.251 [INFO][4887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" HandleID="k8s-pod-network.d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-whisker--5654d7f6ff--28dlk-eth0" Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.253 [INFO][4887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:21.256388 env[1335]: 2025-08-13 00:58:21.255 [INFO][4865] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c" Aug 13 00:58:21.257914 env[1335]: time="2025-08-13T00:58:21.257858712Z" level=info msg="TearDown network for sandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\" successfully" Aug 13 00:58:21.266847 env[1335]: time="2025-08-13T00:58:21.266784824Z" level=info msg="RemovePodSandbox \"d58a790887d6252562798d9d64dcf1d2e26a5a00ed635d6a2ceab7038cc4d66c\" returns successfully" Aug 13 00:58:21.268192 env[1335]: time="2025-08-13T00:58:21.268120515Z" level=info msg="StopPodSandbox for \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\"" Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.380 [WARNING][4906] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0", GenerateName:"calico-apiserver-5d7cc8c448-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bb39f28-f779-40ce-a3ee-b95479595e66", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7cc8c448", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559", Pod:"calico-apiserver-5d7cc8c448-5tbcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3c5f93b933", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.382 [INFO][4906] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.383 [INFO][4906] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" iface="eth0" netns="" Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.383 [INFO][4906] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.383 [INFO][4906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.435 [INFO][4916] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" HandleID="k8s-pod-network.a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.436 [INFO][4916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.436 [INFO][4916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.446 [WARNING][4916] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" HandleID="k8s-pod-network.a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.446 [INFO][4916] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" HandleID="k8s-pod-network.a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.449 [INFO][4916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:21.454143 env[1335]: 2025-08-13 00:58:21.451 [INFO][4906] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:21.455099 env[1335]: time="2025-08-13T00:58:21.454187032Z" level=info msg="TearDown network for sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\" successfully" Aug 13 00:58:21.455099 env[1335]: time="2025-08-13T00:58:21.454225372Z" level=info msg="StopPodSandbox for \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\" returns successfully" Aug 13 00:58:21.455099 env[1335]: time="2025-08-13T00:58:21.454846976Z" level=info msg="RemovePodSandbox for \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\"" Aug 13 00:58:21.455099 env[1335]: time="2025-08-13T00:58:21.454900891Z" level=info msg="Forcibly stopping sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\"" Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.520 [WARNING][4930] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0", GenerateName:"calico-apiserver-5d7cc8c448-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bb39f28-f779-40ce-a3ee-b95479595e66", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7cc8c448", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"f0de4bf65b54a3ea9798d5197710636a98ec6656464983b702564fd40b94d559", Pod:"calico-apiserver-5d7cc8c448-5tbcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3c5f93b933", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.520 [INFO][4930] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.520 [INFO][4930] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" iface="eth0" netns="" Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.520 [INFO][4930] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.520 [INFO][4930] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.563 [INFO][4937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" HandleID="k8s-pod-network.a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.563 [INFO][4937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.563 [INFO][4937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.573 [WARNING][4937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" HandleID="k8s-pod-network.a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.573 [INFO][4937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" HandleID="k8s-pod-network.a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--5tbcd-eth0" Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.576 [INFO][4937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:21.579577 env[1335]: 2025-08-13 00:58:21.578 [INFO][4930] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05" Aug 13 00:58:21.580497 env[1335]: time="2025-08-13T00:58:21.579667684Z" level=info msg="TearDown network for sandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\" successfully" Aug 13 00:58:21.586667 env[1335]: time="2025-08-13T00:58:21.586581146Z" level=info msg="RemovePodSandbox \"a90e1f469c5074e8af65ed1198399549b64849e22ae934ccabdc6741538f8a05\" returns successfully" Aug 13 00:58:21.590411 env[1335]: time="2025-08-13T00:58:21.590357817Z" level=info msg="StopPodSandbox for \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\"" Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.657 [WARNING][4951] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0", GenerateName:"calico-apiserver-5d7cc8c448-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a1beee6-8c8b-44a5-9d7c-8a8072355c14", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7cc8c448", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250", Pod:"calico-apiserver-5d7cc8c448-zvxvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c8bdae2843", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.657 [INFO][4951] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.657 [INFO][4951] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" iface="eth0" netns="" Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.658 [INFO][4951] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.658 [INFO][4951] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.730 [INFO][4959] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" HandleID="k8s-pod-network.0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.731 [INFO][4959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.731 [INFO][4959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.749 [WARNING][4959] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" HandleID="k8s-pod-network.0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.749 [INFO][4959] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" HandleID="k8s-pod-network.0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.753 [INFO][4959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:21.760795 env[1335]: 2025-08-13 00:58:21.755 [INFO][4951] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:21.763406 env[1335]: time="2025-08-13T00:58:21.760761583Z" level=info msg="TearDown network for sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\" successfully" Aug 13 00:58:21.763551 env[1335]: time="2025-08-13T00:58:21.763430995Z" level=info msg="StopPodSandbox for \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\" returns successfully" Aug 13 00:58:21.765331 env[1335]: time="2025-08-13T00:58:21.765284739Z" level=info msg="RemovePodSandbox for \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\"" Aug 13 00:58:21.765506 env[1335]: time="2025-08-13T00:58:21.765335675Z" level=info msg="Forcibly stopping sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\"" Aug 13 00:58:21.988334 systemd[1]: run-containerd-runc-k8s.io-49035974361a83d3e5dd9c1de5c2f175d7975c7a15167e648a8b638ea0c7a871-runc.MI6ryd.mount: Deactivated successfully. Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:21.921 [WARNING][4974] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0", GenerateName:"calico-apiserver-5d7cc8c448-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a1beee6-8c8b-44a5-9d7c-8a8072355c14", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7cc8c448", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"9da30e07d527c77a87a7269f0acfd87b08a4a2875b17de512f34fc4b750d4250", Pod:"calico-apiserver-5d7cc8c448-zvxvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c8bdae2843", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:21.922 [INFO][4974] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:21.922 [INFO][4974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" iface="eth0" netns="" Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:21.922 [INFO][4974] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:21.923 [INFO][4974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:22.081 [INFO][4982] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" HandleID="k8s-pod-network.0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:22.081 [INFO][4982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:22.082 [INFO][4982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:22.110 [WARNING][4982] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" HandleID="k8s-pod-network.0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:22.110 [INFO][4982] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" HandleID="k8s-pod-network.0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--apiserver--5d7cc8c448--zvxvr-eth0" Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:22.122 [INFO][4982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:22.162449 env[1335]: 2025-08-13 00:58:22.125 [INFO][4974] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7" Aug 13 00:58:22.164536 env[1335]: time="2025-08-13T00:58:22.162489309Z" level=info msg="TearDown network for sandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\" successfully" Aug 13 00:58:22.284864 kubelet[2240]: I0813 00:58:22.284326 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c744b486-wkm2q" podStartSLOduration=30.51000667 podStartE2EDuration="39.284296743s" podCreationTimestamp="2025-08-13 00:57:43 +0000 UTC" firstStartedPulling="2025-08-13 00:58:12.098496247 +0000 UTC m=+52.237002515" lastFinishedPulling="2025-08-13 00:58:20.872786299 +0000 UTC m=+61.011292588" observedRunningTime="2025-08-13 00:58:21.854870845 +0000 UTC m=+61.993377135" watchObservedRunningTime="2025-08-13 00:58:22.284296743 +0000 UTC m=+62.422803035" Aug 13 00:58:22.328700 env[1335]: time="2025-08-13T00:58:22.328571738Z" level=info msg="RemovePodSandbox \"0b93e0501b70669c48a4a6625b1ed27e216ddbd18db24f359161d377cb766dc7\" returns successfully" Aug 13 00:58:22.351890 env[1335]: time="2025-08-13T00:58:22.351834502Z" level=info msg="StopPodSandbox for \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\"" Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.452 [WARNING][5016] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"42c2f2da-5a83-4b40-aec1-478c8ca60301", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831", Pod:"coredns-7c65d6cfc9-f5mjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf9fdfde503", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.452 [INFO][5016] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.452 [INFO][5016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" iface="eth0" netns="" Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.452 [INFO][5016] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.452 [INFO][5016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.494 [INFO][5023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" HandleID="k8s-pod-network.4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.496 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.496 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.507 [WARNING][5023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" HandleID="k8s-pod-network.4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.507 [INFO][5023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" HandleID="k8s-pod-network.4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.512 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:22.519514 env[1335]: 2025-08-13 00:58:22.515 [INFO][5016] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:22.520682 env[1335]: time="2025-08-13T00:58:22.520585628Z" level=info msg="TearDown network for sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\" successfully" Aug 13 00:58:22.520852 env[1335]: time="2025-08-13T00:58:22.520813694Z" level=info msg="StopPodSandbox for \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\" returns successfully" Aug 13 00:58:22.521849 env[1335]: time="2025-08-13T00:58:22.521790595Z" level=info msg="RemovePodSandbox for \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\"" Aug 13 00:58:22.522107 env[1335]: time="2025-08-13T00:58:22.522025081Z" level=info msg="Forcibly stopping sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\"" Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.623 [WARNING][5040] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"42c2f2da-5a83-4b40-aec1-478c8ca60301", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"aefd6008989b572d1fafd506e83aee92c75e736e75334067273e3ad93927b831", Pod:"coredns-7c65d6cfc9-f5mjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf9fdfde503", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.624 [INFO][5040] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.624 [INFO][5040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" iface="eth0" netns="" Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.624 [INFO][5040] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.624 [INFO][5040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.664 [INFO][5047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" HandleID="k8s-pod-network.4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.664 [INFO][5047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.664 [INFO][5047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.680 [WARNING][5047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" HandleID="k8s-pod-network.4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.680 [INFO][5047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" HandleID="k8s-pod-network.4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--f5mjv-eth0" Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.683 [INFO][5047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:22.687663 env[1335]: 2025-08-13 00:58:22.685 [INFO][5040] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9" Aug 13 00:58:22.688754 env[1335]: time="2025-08-13T00:58:22.687718549Z" level=info msg="TearDown network for sandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\" successfully" Aug 13 00:58:22.695032 env[1335]: time="2025-08-13T00:58:22.694950039Z" level=info msg="RemovePodSandbox \"4bb5f6ddd8202b103a92d2d3a20130b0eea8b5c0b7afad0dda1ee189746579a9\" returns successfully" Aug 13 00:58:22.696006 env[1335]: time="2025-08-13T00:58:22.695957917Z" level=info msg="StopPodSandbox for \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\"" Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.813 [WARNING][5062] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b8de9e1-b570-4daf-8356-82b085f5759f", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90", Pod:"coredns-7c65d6cfc9-mbnvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31a95140d75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.814 [INFO][5062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.814 [INFO][5062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" iface="eth0" netns="" Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.814 [INFO][5062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.814 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.873 [INFO][5069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" HandleID="k8s-pod-network.4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.873 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.873 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.885 [WARNING][5069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" HandleID="k8s-pod-network.4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.886 [INFO][5069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" HandleID="k8s-pod-network.4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.888 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:22.896514 env[1335]: 2025-08-13 00:58:22.893 [INFO][5062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:22.897499 env[1335]: time="2025-08-13T00:58:22.896539371Z" level=info msg="TearDown network for sandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\" successfully" Aug 13 00:58:22.897499 env[1335]: time="2025-08-13T00:58:22.896584976Z" level=info msg="StopPodSandbox for \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\" returns successfully" Aug 13 00:58:22.898199 env[1335]: time="2025-08-13T00:58:22.898142043Z" level=info msg="RemovePodSandbox for \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\"" Aug 13 00:58:22.898321 env[1335]: time="2025-08-13T00:58:22.898193890Z" level=info msg="Forcibly stopping sandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\"" Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:22.986 [WARNING][5085] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b8de9e1-b570-4daf-8356-82b085f5759f", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"82fedd09aabfac9c4bcbbfcd75148607d039a5ea9ef75a6de8dcef7ce8093f90", Pod:"coredns-7c65d6cfc9-mbnvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31a95140d75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:22.986 [INFO][5085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:22.986 [INFO][5085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" iface="eth0" netns="" Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:22.986 [INFO][5085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:22.986 [INFO][5085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:23.082 [INFO][5092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" HandleID="k8s-pod-network.4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:23.082 [INFO][5092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:23.082 [INFO][5092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:23.095 [WARNING][5092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" HandleID="k8s-pod-network.4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:23.095 [INFO][5092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" HandleID="k8s-pod-network.4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--mbnvb-eth0" Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:23.097 [INFO][5092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:23.100831 env[1335]: 2025-08-13 00:58:23.099 [INFO][5085] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8" Aug 13 00:58:23.101773 env[1335]: time="2025-08-13T00:58:23.100869485Z" level=info msg="TearDown network for sandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\" successfully" Aug 13 00:58:23.106764 env[1335]: time="2025-08-13T00:58:23.106678419Z" level=info msg="RemovePodSandbox \"4473b6c4b6add384b749743be34022d791fa8bbab8a729f84480886a54e123d8\" returns successfully" Aug 13 00:58:23.108285 env[1335]: time="2025-08-13T00:58:23.108245433Z" level=info msg="StopPodSandbox for \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\"" Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.235 [WARNING][5108] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0", GenerateName:"calico-kube-controllers-c744b486-", Namespace:"calico-system", SelfLink:"", UID:"e00a54c4-ec09-455a-86b9-9b9e86402f95", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c744b486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667", Pod:"calico-kube-controllers-c744b486-wkm2q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa5a38ae2e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.236 [INFO][5108] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.236 [INFO][5108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" iface="eth0" netns="" Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.236 [INFO][5108] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.236 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.296 [INFO][5115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" HandleID="k8s-pod-network.2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.297 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.297 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.310 [WARNING][5115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" HandleID="k8s-pod-network.2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.310 [INFO][5115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" HandleID="k8s-pod-network.2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.313 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:23.319911 env[1335]: 2025-08-13 00:58:23.315 [INFO][5108] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:23.319911 env[1335]: time="2025-08-13T00:58:23.317404722Z" level=info msg="TearDown network for sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\" successfully" Aug 13 00:58:23.319911 env[1335]: time="2025-08-13T00:58:23.317451274Z" level=info msg="StopPodSandbox for \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\" returns successfully" Aug 13 00:58:23.319911 env[1335]: time="2025-08-13T00:58:23.318146735Z" level=info msg="RemovePodSandbox for \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\"" Aug 13 00:58:23.319911 env[1335]: time="2025-08-13T00:58:23.318288521Z" level=info msg="Forcibly stopping sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\"" Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.412 [WARNING][5131] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0", GenerateName:"calico-kube-controllers-c744b486-", Namespace:"calico-system", SelfLink:"", UID:"e00a54c4-ec09-455a-86b9-9b9e86402f95", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c744b486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-e550169b61dfc7af3211.c.flatcar-212911.internal", ContainerID:"e5f2be468d684f7d3ac3c3b0d793b2178dcd761179dca62ef1d3185ef5d6c667", Pod:"calico-kube-controllers-c744b486-wkm2q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa5a38ae2e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.412 [INFO][5131] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.412 [INFO][5131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" iface="eth0" netns="" Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.412 [INFO][5131] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.412 [INFO][5131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.470 [INFO][5138] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" HandleID="k8s-pod-network.2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.471 [INFO][5138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.471 [INFO][5138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.483 [WARNING][5138] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" HandleID="k8s-pod-network.2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.483 [INFO][5138] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" HandleID="k8s-pod-network.2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Workload="ci--3510--3--8--e550169b61dfc7af3211.c.flatcar--212911.internal-k8s-calico--kube--controllers--c744b486--wkm2q-eth0" Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.486 [INFO][5138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:58:23.490372 env[1335]: 2025-08-13 00:58:23.488 [INFO][5131] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d" Aug 13 00:58:23.491258 env[1335]: time="2025-08-13T00:58:23.490390592Z" level=info msg="TearDown network for sandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\" successfully" Aug 13 00:58:23.496056 env[1335]: time="2025-08-13T00:58:23.495995717Z" level=info msg="RemovePodSandbox \"2b00e93bd85b0a60d35057c2a4980f3c2f7aa0240d1d18c6490f652c57a84c2d\" returns successfully" Aug 13 00:58:23.891711 systemd[1]: run-containerd-runc-k8s.io-49035974361a83d3e5dd9c1de5c2f175d7975c7a15167e648a8b638ea0c7a871-runc.DHpxQK.mount: Deactivated successfully. Aug 13 00:58:24.068043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1981075872.mount: Deactivated successfully. Aug 13 00:58:25.159058 env[1335]: time="2025-08-13T00:58:25.158956665Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:25.162675 env[1335]: time="2025-08-13T00:58:25.162617399Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:25.165988 env[1335]: time="2025-08-13T00:58:25.165919684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:25.168770 env[1335]: time="2025-08-13T00:58:25.168721878Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:25.169838 env[1335]: time="2025-08-13T00:58:25.169779369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 00:58:25.175055 env[1335]: time="2025-08-13T00:58:25.174990453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:58:25.176738 env[1335]: time="2025-08-13T00:58:25.176669754Z" level=info msg="CreateContainer within sandbox \"77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:58:25.208860 env[1335]: 
time="2025-08-13T00:58:25.208788331Z" level=info msg="CreateContainer within sandbox \"77561ac64f9a43a4f70580cd89cc5f68de48c6322eed5f23c36055fbf2b1e4c3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9728dac67616234d1fc98a6278091537e365a1128b2bd4c0c57ec2cf8aadc8a9\"" Aug 13 00:58:25.210411 env[1335]: time="2025-08-13T00:58:25.209999563Z" level=info msg="StartContainer for \"9728dac67616234d1fc98a6278091537e365a1128b2bd4c0c57ec2cf8aadc8a9\"" Aug 13 00:58:25.346662 env[1335]: time="2025-08-13T00:58:25.346505596Z" level=info msg="StartContainer for \"9728dac67616234d1fc98a6278091537e365a1128b2bd4c0c57ec2cf8aadc8a9\" returns successfully" Aug 13 00:58:26.030000 audit[5202]: NETFILTER_CFG table=filter:127 family=2 entries=10 op=nft_register_rule pid=5202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:26.050922 kernel: audit: type=1325 audit(1755046706.030:417): table=filter:127 family=2 entries=10 op=nft_register_rule pid=5202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:26.030000 audit[5202]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe11ea0180 a2=0 a3=7ffe11ea016c items=0 ppid=2359 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:26.088330 kernel: audit: type=1300 audit(1755046706.030:417): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe11ea0180 a2=0 a3=7ffe11ea016c items=0 ppid=2359 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:26.030000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:26.112666 kernel: audit: type=1327 audit(1755046706.030:417): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:26.049000 audit[5202]: NETFILTER_CFG table=nat:128 family=2 entries=24 op=nft_register_rule pid=5202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:26.131669 kernel: audit: type=1325 audit(1755046706.049:418): table=nat:128 family=2 entries=24 op=nft_register_rule pid=5202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:26.049000 audit[5202]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffe11ea0180 a2=0 a3=7ffe11ea016c items=0 ppid=2359 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:26.190077 kernel: audit: type=1300 audit(1755046706.049:418): arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffe11ea0180 a2=0 a3=7ffe11ea016c items=0 ppid=2359 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:26.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:26.208669 kernel: audit: type=1327 audit(1755046706.049:418): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:26.468428 kubelet[2240]: I0813 00:58:26.468335 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-cpcml" podStartSLOduration=31.401918638 podStartE2EDuration="44.468303338s" podCreationTimestamp="2025-08-13 00:57:42 +0000 UTC" firstStartedPulling="2025-08-13 00:58:12.1054818 +0000 UTC m=+52.243988064" lastFinishedPulling="2025-08-13 00:58:25.171866482 +0000 UTC m=+65.310372764" observedRunningTime="2025-08-13 00:58:25.990332791 +0000 UTC m=+66.128839081" watchObservedRunningTime="2025-08-13 00:58:26.468303338 +0000 UTC m=+66.606809629" Aug 13 00:58:26.727695 env[1335]: time="2025-08-13T00:58:26.727536528Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:26.731016 env[1335]: time="2025-08-13T00:58:26.730956781Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:26.737563 env[1335]: time="2025-08-13T00:58:26.736910791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:26.740758 env[1335]: time="2025-08-13T00:58:26.740699014Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:58:26.741890 env[1335]: time="2025-08-13T00:58:26.741829468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 00:58:26.747241 env[1335]: time="2025-08-13T00:58:26.747153261Z" level=info msg="CreateContainer within sandbox \"7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:58:26.775785 env[1335]: time="2025-08-13T00:58:26.775705411Z" level=info msg="CreateContainer within sandbox \"7c4f9bde752a3417b9e2a104bab908be9f4d7f2030a4ab70642d8eb51eb50932\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"76dbee4b2a5c3fa0c1dfff4ad27e9f85ad4ca101667c4c311f7860cb15f9d9d5\"" Aug 13 00:58:26.778495 env[1335]: time="2025-08-13T00:58:26.776698059Z" level=info msg="StartContainer for \"76dbee4b2a5c3fa0c1dfff4ad27e9f85ad4ca101667c4c311f7860cb15f9d9d5\"" Aug 13 00:58:26.876721 env[1335]: time="2025-08-13T00:58:26.876543090Z" level=info msg="StartContainer for \"76dbee4b2a5c3fa0c1dfff4ad27e9f85ad4ca101667c4c311f7860cb15f9d9d5\" returns successfully" Aug 13 00:58:26.985150 kubelet[2240]: I0813 00:58:26.984125 2240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hv7wr" podStartSLOduration=25.130895101 podStartE2EDuration="43.984093063s" podCreationTimestamp="2025-08-13 00:57:43 +0000 UTC" firstStartedPulling="2025-08-13 00:58:07.890203914 +0000 UTC m=+48.028710188" lastFinishedPulling="2025-08-13 00:58:26.743401861 +0000 UTC m=+66.881908150" 
observedRunningTime="2025-08-13 00:58:26.979950742 +0000 UTC m=+67.118457033" watchObservedRunningTime="2025-08-13 00:58:26.984093063 +0000 UTC m=+67.122599357" Aug 13 00:58:27.404896 kubelet[2240]: I0813 00:58:27.404855 2240 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:58:27.404896 kubelet[2240]: I0813 00:58:27.404898 2240 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:58:34.519139 systemd[1]: Started sshd@7-10.128.0.76:22-139.178.68.195:54950.service. Aug 13 00:58:34.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.76:22-139.178.68.195:54950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:34.545628 kernel: audit: type=1130 audit(1755046714.518:419): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.76:22-139.178.68.195:54950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:34.849000 audit[5305]: USER_ACCT pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:34.880637 kernel: audit: type=1101 audit(1755046714.849:420): pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:34.882457 sshd[5305]: Accepted publickey for core from 139.178.68.195 port 54950 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:58:34.884336 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:58:34.883000 audit[5305]: CRED_ACQ pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:34.913166 kernel: audit: type=1103 audit(1755046714.883:421): pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:34.959271 kernel: audit: type=1006 audit(1755046714.883:422): pid=5305 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Aug 13 00:58:34.959484 kernel: audit: type=1300 audit(1755046714.883:422): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc66aa8d10 a2=3 a3=0 items=0 ppid=1 pid=5305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:34.883000 audit[5305]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc66aa8d10 a2=3 a3=0 items=0 ppid=1 pid=5305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:34.883000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:58:34.970812 kernel: audit: type=1327 audit(1755046714.883:422): proctitle=737368643A20636F7265205B707269765D Aug 13 00:58:34.975995 systemd-logind[1323]: New session 8 of user core. Aug 13 00:58:34.976584 systemd[1]: Started session-8.scope. Aug 13 00:58:35.041220 kernel: audit: type=1105 audit(1755046715.003:423): pid=5305 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:35.003000 audit[5305]: USER_START pid=5305 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:35.041000 audit[5308]: CRED_ACQ pid=5308 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:35.067705 kernel: audit: type=1103 audit(1755046715.041:424): pid=5308 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:35.528761 sshd[5305]: pam_unix(sshd:session): session closed for user core Aug 13 00:58:35.530000 audit[5305]: USER_END pid=5305 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:35.564629 kernel: audit: type=1106 audit(1755046715.530:425): pid=5305 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:35.567145 systemd-logind[1323]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:58:35.568546 systemd[1]: sshd@7-10.128.0.76:22-139.178.68.195:54950.service: Deactivated successfully. Aug 13 00:58:35.571036 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:58:35.573933 systemd-logind[1323]: Removed session 8. 
Aug 13 00:58:35.540000 audit[5305]: CRED_DISP pid=5305 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:35.611819 kernel: audit: type=1104 audit(1755046715.540:426): pid=5305 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:35.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.76:22-139.178.68.195:54950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:40.577785 systemd[1]: Started sshd@8-10.128.0.76:22-139.178.68.195:56376.service. Aug 13 00:58:40.608975 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:58:40.609140 kernel: audit: type=1130 audit(1755046720.576:428): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.76:22-139.178.68.195:56376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:40.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.76:22-139.178.68.195:56376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:40.889000 audit[5319]: USER_ACCT pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:40.921272 sshd[5319]: Accepted publickey for core from 139.178.68.195 port 56376 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:58:40.921836 kernel: audit: type=1101 audit(1755046720.889:429): pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:40.923514 sshd[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:58:40.921000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:40.944631 systemd[1]: Started session-9.scope. Aug 13 00:58:40.946923 systemd-logind[1323]: New session 9 of user core. 
Aug 13 00:58:40.970742 kernel: audit: type=1103 audit(1755046720.921:430): pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:41.018545 kernel: audit: type=1006 audit(1755046720.921:431): pid=5319 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Aug 13 00:58:40.921000 audit[5319]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc12d06e30 a2=3 a3=0 items=0 ppid=1 pid=5319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:41.056628 kernel: audit: type=1300 audit(1755046720.921:431): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc12d06e30 a2=3 a3=0 items=0 ppid=1 pid=5319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:40.921000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:58:40.964000 audit[5319]: USER_START pid=5319 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:41.102149 kernel: audit: type=1327 audit(1755046720.921:431): proctitle=737368643A20636F7265205B707269765D Aug 13 00:58:41.102336 kernel: audit: type=1105 audit(1755046720.964:432): pid=5319 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:41.107037 kernel: audit: type=1103 audit(1755046720.969:433): pid=5322 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:40.969000 audit[5322]: CRED_ACQ pid=5322 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:41.393478 sshd[5319]: pam_unix(sshd:session): session closed for user core Aug 13 00:58:41.395000 audit[5319]: USER_END pid=5319 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:41.429638 kernel: audit: type=1106 audit(1755046721.395:434): pid=5319 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:41.397000 audit[5319]: CRED_DISP pid=5319 uid=0 auid=500 ses=9 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:41.434015 systemd[1]: sshd@8-10.128.0.76:22-139.178.68.195:56376.service: Deactivated successfully. Aug 13 00:58:41.435438 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:58:41.440565 systemd-logind[1323]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:58:41.442981 systemd-logind[1323]: Removed session 9. Aug 13 00:58:41.457631 kernel: audit: type=1104 audit(1755046721.397:435): pid=5319 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:41.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.76:22-139.178.68.195:56376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:46.441613 systemd[1]: Started sshd@9-10.128.0.76:22-139.178.68.195:56382.service. Aug 13 00:58:46.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.76:22-139.178.68.195:56382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:46.450309 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:58:46.450466 kernel: audit: type=1130 audit(1755046726.441:437): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.76:22-139.178.68.195:56382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:46.773000 audit[5338]: USER_ACCT pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:46.776370 sshd[5338]: Accepted publickey for core from 139.178.68.195 port 56382 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:58:46.804914 kernel: audit: type=1101 audit(1755046726.773:438): pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:46.805804 sshd[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:58:46.816603 systemd[1]: Started session-10.scope. Aug 13 00:58:46.816920 systemd-logind[1323]: New session 10 of user core. 
Aug 13 00:58:46.803000 audit[5338]: CRED_ACQ pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:46.869455 kernel: audit: type=1103 audit(1755046726.803:439): pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:46.869632 kernel: audit: type=1006 audit(1755046726.804:440): pid=5338 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Aug 13 00:58:46.804000 audit[5338]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffac4aac30 a2=3 a3=0 items=0 ppid=1 pid=5338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:46.804000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:58:46.902766 kernel: audit: type=1300 audit(1755046726.804:440): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffac4aac30 a2=3 a3=0 items=0 ppid=1 pid=5338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:46.902830 kernel: audit: type=1327 audit(1755046726.804:440): proctitle=737368643A20636F7265205B707269765D Aug 13 00:58:46.832000 audit[5338]: USER_START pid=5338 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:46.944686 kernel: audit: type=1105 audit(1755046726.832:441): pid=5338 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:46.846000 audit[5341]: CRED_ACQ pid=5341 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:46.971628 kernel: audit: type=1103 audit(1755046726.846:442): pid=5341 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:47.157926 sshd[5338]: pam_unix(sshd:session): session closed for user core Aug 13 00:58:47.160000 audit[5338]: USER_END pid=5338 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:47.194637 kernel: audit: type=1106 audit(1755046727.160:443): pid=5338 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:47.196137 systemd-logind[1323]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:58:47.203879 systemd[1]: Started sshd@10-10.128.0.76:22-139.178.68.195:56390.service. Aug 13 00:58:47.207429 systemd[1]: sshd@9-10.128.0.76:22-139.178.68.195:56382.service: Deactivated successfully. Aug 13 00:58:47.215017 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:58:47.230718 systemd-logind[1323]: Removed session 10. Aug 13 00:58:47.172000 audit[5338]: CRED_DISP pid=5338 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:47.264795 kernel: audit: type=1104 audit(1755046727.172:444): pid=5338 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:47.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.76:22-139.178.68.195:56390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:47.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.76:22-139.178.68.195:56382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:47.535000 audit[5349]: USER_ACCT pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:47.539043 sshd[5349]: Accepted publickey for core from 139.178.68.195 port 56390 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:58:47.538000 audit[5349]: CRED_ACQ pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:47.538000 audit[5349]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5e80e390 a2=3 a3=0 items=0 ppid=1 pid=5349 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:47.538000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:58:47.547466 sshd[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:58:47.556539 systemd[1]: Started session-11.scope. Aug 13 00:58:47.559361 systemd-logind[1323]: New session 11 of user core. 
Aug 13 00:58:47.579000 audit[5349]: USER_START pid=5349 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:47.582000 audit[5354]: CRED_ACQ pid=5354 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:47.990753 sshd[5349]: pam_unix(sshd:session): session closed for user core Aug 13 00:58:47.991000 audit[5349]: USER_END pid=5349 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:47.992000 audit[5349]: CRED_DISP pid=5349 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:47.997332 systemd-logind[1323]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:58:47.997759 systemd[1]: sshd@10-10.128.0.76:22-139.178.68.195:56390.service: Deactivated successfully. Aug 13 00:58:47.999205 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:58:47.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.76:22-139.178.68.195:56390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:48.005844 systemd-logind[1323]: Removed session 11. Aug 13 00:58:48.033928 systemd[1]: Started sshd@11-10.128.0.76:22-139.178.68.195:56406.service. Aug 13 00:58:48.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.76:22-139.178.68.195:56406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:58:48.345000 audit[5362]: USER_ACCT pid=5362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:48.348283 sshd[5362]: Accepted publickey for core from 139.178.68.195 port 56406 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:58:48.347000 audit[5362]: CRED_ACQ pid=5362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:48.347000 audit[5362]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1e98b9e0 a2=3 a3=0 items=0 ppid=1 pid=5362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:48.347000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:58:48.349719 sshd[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:58:48.359971 systemd-logind[1323]: New session 12 of user core. Aug 13 00:58:48.361037 systemd[1]: Started session-12.scope. Aug 13 00:58:48.370000 audit[5362]: USER_START pid=5362 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:48.373000 audit[5365]: CRED_ACQ pid=5365 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:48.690539 sshd[5362]: pam_unix(sshd:session): session closed for user core Aug 13 00:58:48.692000 audit[5362]: USER_END pid=5362 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:48.693000 audit[5362]: CRED_DISP pid=5362 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:48.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.76:22-139.178.68.195:56406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:48.698370 systemd[1]: sshd@11-10.128.0.76:22-139.178.68.195:56406.service: Deactivated successfully. Aug 13 00:58:48.700542 systemd-logind[1323]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:58:48.703190 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:58:48.704734 systemd-logind[1323]: Removed session 12. 
Aug 13 00:58:49.673974 kubelet[2240]: I0813 00:58:49.673923 2240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:58:49.776000 audit[5376]: NETFILTER_CFG table=filter:129 family=2 entries=10 op=nft_register_rule pid=5376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:49.776000 audit[5376]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe12133730 a2=0 a3=7ffe1213371c items=0 ppid=2359 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:49.776000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:49.782000 audit[5376]: NETFILTER_CFG table=nat:130 family=2 entries=36 op=nft_register_chain pid=5376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:49.782000 audit[5376]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffe12133730 a2=0 a3=7ffe1213371c items=0 ppid=2359 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:49.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:53.746690 kernel: kauditd_printk_skb: 29 callbacks suppressed Aug 13 00:58:53.746913 kernel: audit: type=1130 audit(1755046733.738:466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.76:22-139.178.68.195:39786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:53.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.76:22-139.178.68.195:39786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:53.738321 systemd[1]: Started sshd@12-10.128.0.76:22-139.178.68.195:39786.service. Aug 13 00:58:53.854505 systemd[1]: run-containerd-runc-k8s.io-49035974361a83d3e5dd9c1de5c2f175d7975c7a15167e648a8b638ea0c7a871-runc.41aGEi.mount: Deactivated successfully. Aug 13 00:58:53.922087 systemd[1]: run-containerd-runc-k8s.io-9728dac67616234d1fc98a6278091537e365a1128b2bd4c0c57ec2cf8aadc8a9-runc.ZvAHqZ.mount: Deactivated successfully. 
Aug 13 00:58:54.075000 audit[5379]: USER_ACCT pid=5379 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.106193 sshd[5379]: Accepted publickey for core from 139.178.68.195 port 39786 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:58:54.147651 kernel: audit: type=1101 audit(1755046734.075:467): pid=5379 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.147000 audit[5379]: CRED_ACQ pid=5379 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.174840 kernel: audit: type=1103 audit(1755046734.147:468): pid=5379 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.148953 sshd[5379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:58:54.191919 kernel: audit: type=1006 audit(1755046734.147:469): pid=5379 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Aug 13 00:58:54.147000 audit[5379]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdafaadc40 a2=3 a3=0 items=0 ppid=1 pid=5379 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:54.221748 kernel: audit: type=1300 audit(1755046734.147:469): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdafaadc40 a2=3 a3=0 items=0 ppid=1 pid=5379 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:54.197653 systemd-logind[1323]: New session 13 of user core. Aug 13 00:58:54.198707 systemd[1]: Started session-13.scope. 
Aug 13 00:58:54.147000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:58:54.280430 kernel: audit: type=1327 audit(1755046734.147:469): proctitle=737368643A20636F7265205B707269765D Aug 13 00:58:54.280677 kernel: audit: type=1105 audit(1755046734.225:470): pid=5379 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.225000 audit[5379]: USER_START pid=5379 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.228000 audit[5411]: CRED_ACQ pid=5411 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.305784 kernel: audit: type=1103 audit(1755046734.228:471): pid=5411 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.641905 sshd[5379]: pam_unix(sshd:session): session closed for user core Aug 13 00:58:54.645000 audit[5379]: USER_END pid=5379 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.681627 kernel: audit: type=1106 audit(1755046734.645:472): pid=5379 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.682432 systemd[1]: sshd@12-10.128.0.76:22-139.178.68.195:39786.service: Deactivated successfully. Aug 13 00:58:54.685322 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:58:54.685994 systemd-logind[1323]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:58:54.688710 systemd-logind[1323]: Removed session 13. Aug 13 00:58:54.645000 audit[5379]: CRED_DISP pid=5379 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.729638 kernel: audit: type=1104 audit(1755046734.645:473): pid=5379 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:58:54.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.76:22-139.178.68.195:39786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:58:54.913000 audit[5433]: NETFILTER_CFG table=filter:131 family=2 entries=9 op=nft_register_rule pid=5433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:54.913000 audit[5433]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffee7be8870 a2=0 a3=7ffee7be885c items=0 ppid=2359 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:54.913000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:54.923000 audit[5433]: NETFILTER_CFG table=nat:132 family=2 entries=31 op=nft_register_chain pid=5433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:58:54.923000 audit[5433]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffee7be8870 a2=0 a3=7ffee7be885c items=0 ppid=2359 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:58:54.923000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:58:59.689791 systemd[1]: Started sshd@13-10.128.0.76:22-139.178.68.195:39790.service. Aug 13 00:58:59.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.76:22-139.178.68.195:39790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:58:59.697292 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:58:59.697462 kernel: audit: type=1130 audit(1755046739.691:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.76:22-139.178.68.195:39790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:59:00.026000 audit[5455]: USER_ACCT pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.057117 sshd[5455]: Accepted publickey for core from 139.178.68.195 port 39790 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:59:00.057767 kernel: audit: type=1101 audit(1755046740.026:478): pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.060319 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:59:00.059000 audit[5455]: CRED_ACQ pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.087629 kernel: audit: type=1103 audit(1755046740.059:479): pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.134515 kernel: audit: type=1006 audit(1755046740.059:480): pid=5455 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Aug 13 00:59:00.134715 kernel: audit: type=1300 audit(1755046740.059:480): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef84b1240 a2=3 a3=0 items=0 ppid=1 pid=5455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:00.059000 audit[5455]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef84b1240 a2=3 a3=0 items=0 ppid=1 pid=5455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:00.151843 kernel: audit: type=1327 audit(1755046740.059:480): proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:00.059000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:00.156761 systemd-logind[1323]: New session 14 of user core. Aug 13 00:59:00.158323 systemd[1]: Started session-14.scope. 
Aug 13 00:59:00.174000 audit[5455]: USER_START pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.179000 audit[5458]: CRED_ACQ pid=5458 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.232772 kernel: audit: type=1105 audit(1755046740.174:481): pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.232958 kernel: audit: type=1103 audit(1755046740.179:482): pid=5458 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.590895 sshd[5455]: pam_unix(sshd:session): session closed for user core Aug 13 00:59:00.592000 audit[5455]: USER_END pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.626827 kernel: audit: type=1106 audit(1755046740.592:483): pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.628065 systemd[1]: sshd@13-10.128.0.76:22-139.178.68.195:39790.service: Deactivated successfully. Aug 13 00:59:00.630747 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:59:00.631094 systemd-logind[1323]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:59:00.622000 audit[5455]: CRED_DISP pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.658951 kernel: audit: type=1104 audit(1755046740.622:484): pid=5455 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:00.660332 systemd-logind[1323]: Removed session 14. Aug 13 00:59:00.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.76:22-139.178.68.195:39790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:05.636739 systemd[1]: Started sshd@14-10.128.0.76:22-139.178.68.195:55346.service. 
Aug 13 00:59:05.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.76:22-139.178.68.195:55346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:05.643244 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:59:05.643414 kernel: audit: type=1130 audit(1755046745.636:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.76:22-139.178.68.195:55346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:05.993000 audit[5468]: USER_ACCT pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.020841 sshd[5468]: Accepted publickey for core from 139.178.68.195 port 55346 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:59:06.025623 kernel: audit: type=1101 audit(1755046745.993:487): pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.030025 sshd[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:59:06.027000 audit[5468]: CRED_ACQ pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.060645 kernel: audit: type=1103 audit(1755046746.027:488): pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.068927 systemd-logind[1323]: New session 15 of user core. Aug 13 00:59:06.070707 systemd[1]: Started session-15.scope. 
Aug 13 00:59:06.095628 kernel: audit: type=1006 audit(1755046746.027:489): pid=5468 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Aug 13 00:59:06.027000 audit[5468]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1b60f9d0 a2=3 a3=0 items=0 ppid=1 pid=5468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:06.125639 kernel: audit: type=1300 audit(1755046746.027:489): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1b60f9d0 a2=3 a3=0 items=0 ppid=1 pid=5468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:06.027000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:06.097000 audit[5468]: USER_START pid=5468 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.178707 kernel: audit: type=1327 audit(1755046746.027:489): proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:06.178910 kernel: audit: type=1105 audit(1755046746.097:490): pid=5468 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.100000 audit[5471]: CRED_ACQ pid=5471 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.203828 kernel: audit: type=1103 audit(1755046746.100:491): pid=5471 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.456953 sshd[5468]: pam_unix(sshd:session): session closed for user core Aug 13 00:59:06.457000 audit[5468]: USER_END pid=5468 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.494642 kernel: audit: type=1106 audit(1755046746.457:492): pid=5468 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.496044 systemd-logind[1323]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:59:06.498503 systemd[1]: sshd@14-10.128.0.76:22-139.178.68.195:55346.service: Deactivated successfully. Aug 13 00:59:06.500001 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:59:06.502337 systemd-logind[1323]: Removed session 15. 
Aug 13 00:59:06.462000 audit[5468]: CRED_DISP pid=5468 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.533620 kernel: audit: type=1104 audit(1755046746.462:493): pid=5468 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:06.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.76:22-139.178.68.195:55346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:06.587380 systemd[1]: run-containerd-runc-k8s.io-9728dac67616234d1fc98a6278091537e365a1128b2bd4c0c57ec2cf8aadc8a9-runc.XXBhAw.mount: Deactivated successfully. Aug 13 00:59:11.510936 systemd[1]: Started sshd@15-10.128.0.76:22-139.178.68.195:55366.service. Aug 13 00:59:11.520883 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:59:11.520961 kernel: audit: type=1130 audit(1755046751.510:495): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.76:22-139.178.68.195:55366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:11.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.76:22-139.178.68.195:55366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:11.856000 audit[5501]: USER_ACCT pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:11.865041 sshd[5501]: Accepted publickey for core from 139.178.68.195 port 55366 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:59:11.887805 kernel: audit: type=1101 audit(1755046751.856:496): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:11.891067 sshd[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:59:11.889000 audit[5501]: CRED_ACQ pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:11.920037 kernel: audit: type=1103 audit(1755046751.889:497): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:11.920190 kernel: audit: type=1006 audit(1755046751.889:498): pid=5501 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Aug 13 00:59:11.923306 systemd-logind[1323]: New session 16 of user core. 
Aug 13 00:59:11.925198 systemd[1]: Started session-16.scope. Aug 13 00:59:11.889000 audit[5501]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7fa87b20 a2=3 a3=0 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:11.976631 kernel: audit: type=1300 audit(1755046751.889:498): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7fa87b20 a2=3 a3=0 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:11.889000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:11.990632 kernel: audit: type=1327 audit(1755046751.889:498): proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:11.947000 audit[5501]: USER_START pid=5501 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:12.024619 kernel: audit: type=1105 audit(1755046751.947:499): pid=5501 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:11.978000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:12.049624 kernel: audit: type=1103 audit(1755046751.978:500): pid=5504 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:12.266382 sshd[5501]: pam_unix(sshd:session): session closed for user core Aug 13 00:59:12.268000 audit[5501]: USER_END pid=5501 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:12.275205 systemd[1]: sshd@15-10.128.0.76:22-139.178.68.195:55366.service: Deactivated successfully. Aug 13 00:59:12.276900 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:59:12.279482 systemd-logind[1323]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:59:12.281205 systemd-logind[1323]: Removed session 16. 
Aug 13 00:59:12.303282 kernel: audit: type=1106 audit(1755046752.268:501): pid=5501 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:12.268000 audit[5501]: CRED_DISP pid=5501 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:12.329377 kernel: audit: type=1104 audit(1755046752.268:502): pid=5501 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:12.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.76:22-139.178.68.195:55366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:14.062000 audit[1533]: AVC avc: denied { associate } for pid=1533 comm="google_accounts" name="#d1" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=0 Aug 13 00:59:14.062000 audit[1533]: SYSCALL arch=c000003e syscall=83 success=no exit=-13 a0=7fa6b8807b90 a1=1ff a2=1ff a3=0 items=0 ppid=1479 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=4294967295 comm="google_accounts" exe="/usr/bin/python3.9" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:14.062000 audit: PROCTITLE proctitle=2F7573722F6C69622F707974686F6E2D657865632F707974686F6E332E392F707974686F6E33002F7573722F62696E2F676F6F676C655F6163636F756E74735F6461656D6F6E
Aug 13 00:59:14.069069 google-accounts[1533]: ERROR Exception calling the response handler. [Errno 13] Permission denied: '/var/lib/google'.
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/google_compute_engine/metadata_watcher.py", line 200, in WatchMetadata
    handler(response)
  File "/usr/lib/python3.9/site-packages/google_compute_engine/accounts/accounts_daemon.py", line 285, in HandleAccounts
    self.utils.SetConfiguredUsers(desired_users.keys())
  File "/usr/lib/python3.9/site-packages/google_compute_engine/accounts/accounts_utils.py", line 324, in SetConfiguredUsers
    os.makedirs(self.google_users_dir)
  File "/usr/lib/python-exec/python3.9/../../../lib/python3.9/os.py", line 225, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/var/lib/google'
Aug 13 00:59:17.313634 systemd[1]: Started sshd@16-10.128.0.76:22-139.178.68.195:55372.service. Aug 13 00:59:17.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.76:22-139.178.68.195:55372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:17.319331 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 13 00:59:17.319456 kernel: audit: type=1130 audit(1755046757.313:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.76:22-139.178.68.195:55372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Aug 13 00:59:17.676000 audit[5515]: USER_ACCT pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:17.702857 sshd[5515]: Accepted publickey for core from 139.178.68.195 port 55372 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:59:17.707689 kernel: audit: type=1101 audit(1755046757.676:506): pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:17.711231 sshd[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:59:17.708000 audit[5515]: CRED_ACQ pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:17.738017 kernel: audit: type=1103 audit(1755046757.708:507): pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:17.745886 systemd[1]: Started session-17.scope. Aug 13 00:59:17.746259 systemd-logind[1323]: New session 17 of user core. Aug 13 00:59:17.787640 kernel: audit: type=1006 audit(1755046757.709:508): pid=5515 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Aug 13 00:59:17.709000 audit[5515]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc23cec7e0 a2=3 a3=0 items=0 ppid=1 pid=5515 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:17.823653 kernel: audit: type=1300 audit(1755046757.709:508): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc23cec7e0 a2=3 a3=0 items=0 ppid=1 pid=5515 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:17.823856 kernel: audit: type=1327 audit(1755046757.709:508): proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:17.709000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:17.758000 audit[5515]: USER_START pid=5515 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:17.866664 kernel: audit: type=1105 audit(1755046757.758:509): pid=5515 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:17.866857 kernel: audit: type=1103 audit(1755046757.762:510): pid=5518 uid=0 auid=500 
ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:17.762000 audit[5518]: CRED_ACQ pid=5518 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:18.106286 sshd[5515]: pam_unix(sshd:session): session closed for user core Aug 13 00:59:18.143634 kernel: audit: type=1106 audit(1755046758.107:511): pid=5515 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:18.107000 audit[5515]: USER_END pid=5515 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:18.107000 audit[5515]: CRED_DISP pid=5515 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:18.159772 systemd[1]: Started sshd@17-10.128.0.76:22-139.178.68.195:55378.service. Aug 13 00:59:18.169149 systemd[1]: sshd@16-10.128.0.76:22-139.178.68.195:55372.service: Deactivated successfully. Aug 13 00:59:18.174638 kernel: audit: type=1104 audit(1755046758.107:512): pid=5515 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:18.174765 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:59:18.185271 systemd-logind[1323]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:59:18.192121 systemd-logind[1323]: Removed session 17. Aug 13 00:59:18.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.76:22-139.178.68.195:55378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:18.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.76:22-139.178.68.195:55372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:59:18.495000 audit[5526]: USER_ACCT pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:18.498576 sshd[5526]: Accepted publickey for core from 139.178.68.195 port 55378 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:59:18.497000 audit[5526]: CRED_ACQ pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:18.497000 audit[5526]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffffd65c890 a2=3 a3=0 items=0 ppid=1 pid=5526 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:18.497000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:18.499964 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:59:18.510747 systemd[1]: Started session-18.scope. Aug 13 00:59:18.511346 systemd-logind[1323]: New session 18 of user core. Aug 13 00:59:18.528000 audit[5526]: USER_START pid=5526 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:18.531000 audit[5531]: CRED_ACQ pid=5531 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:19.066283 sshd[5526]: pam_unix(sshd:session): session closed for user core Aug 13 00:59:19.066000 audit[5526]: USER_END pid=5526 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:19.067000 audit[5526]: CRED_DISP pid=5526 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:19.072209 systemd-logind[1323]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:59:19.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.76:22-139.178.68.195:55378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:19.075329 systemd[1]: sshd@17-10.128.0.76:22-139.178.68.195:55378.service: Deactivated successfully. Aug 13 00:59:19.076796 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:59:19.080911 systemd-logind[1323]: Removed session 18. 
Aug 13 00:59:19.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.76:22-139.178.68.195:55390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:19.109521 systemd[1]: Started sshd@18-10.128.0.76:22-139.178.68.195:55390.service. Aug 13 00:59:19.409000 audit[5539]: USER_ACCT pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:19.413214 sshd[5539]: Accepted publickey for core from 139.178.68.195 port 55390 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:59:19.412000 audit[5539]: CRED_ACQ pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:19.412000 audit[5539]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde9f36510 a2=3 a3=0 items=0 ppid=1 pid=5539 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:19.412000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:19.414403 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:59:19.432789 systemd-logind[1323]: New session 19 of user core. Aug 13 00:59:19.434150 systemd[1]: Started session-19.scope. Aug 13 00:59:19.443000 audit[5539]: USER_START pid=5539 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:19.446000 audit[5542]: CRED_ACQ pid=5542 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:23.141000 audit[5554]: NETFILTER_CFG table=filter:133 family=2 entries=8 op=nft_register_rule pid=5554 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:59:23.148708 kernel: kauditd_printk_skb: 20 callbacks suppressed Aug 13 00:59:23.148900 kernel: audit: type=1325 audit(1755046763.141:529): table=filter:133 family=2 entries=8 op=nft_register_rule pid=5554 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:59:23.168835 sshd[5539]: pam_unix(sshd:session): session closed for user core Aug 13 00:59:23.141000 audit[5554]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd4ffd2d60 a2=0 a3=7ffd4ffd2d4c items=0 ppid=2359 pid=5554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:23.204785 kernel: audit: type=1300 audit(1755046763.141:529): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd4ffd2d60 a2=0 a3=7ffd4ffd2d4c items=0 ppid=2359 pid=5554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:23.217000 systemd[1]: Started sshd@19-10.128.0.76:22-139.178.68.195:34622.service. Aug 13 00:59:23.219047 systemd[1]: sshd@18-10.128.0.76:22-139.178.68.195:55390.service: Deactivated successfully. Aug 13 00:59:23.230979 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:59:23.233197 systemd-logind[1323]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:59:23.235337 systemd-logind[1323]: Removed session 19. Aug 13 00:59:23.141000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:59:23.269624 kernel: audit: type=1327 audit(1755046763.141:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:59:23.169000 audit[5554]: NETFILTER_CFG table=nat:134 family=2 entries=26 op=nft_register_rule pid=5554 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:59:23.169000 audit[5554]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffd4ffd2d60 a2=0 a3=7ffd4ffd2d4c items=0 ppid=2359 pid=5554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:23.321740 kernel: audit: type=1325 audit(1755046763.169:530): table=nat:134 family=2 entries=26 op=nft_register_rule pid=5554 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:59:23.321930 kernel: audit: type=1300 audit(1755046763.169:530): arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffd4ffd2d60 a2=0 a3=7ffd4ffd2d4c items=0 ppid=2359 pid=5554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:23.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:59:23.343625 kernel: audit: type=1327 audit(1755046763.169:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:59:23.343814 kernel: audit: type=1106 audit(1755046763.207:531): pid=5539 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:23.207000 audit[5539]: USER_END pid=5539 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:23.207000 audit[5539]: CRED_DISP pid=5539 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:23.421828 kernel: audit: type=1104 audit(1755046763.207:532): pid=5539 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:23.422015 kernel: audit: type=1130 audit(1755046763.216:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.76:22-139.178.68.195:34622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:23.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.76:22-139.178.68.195:34622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:23.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.76:22-139.178.68.195:55390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:23.470630 kernel: audit: type=1131 audit(1755046763.220:534): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.76:22-139.178.68.195:55390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:23.269000 audit[5560]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:59:23.269000 audit[5560]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fff0db20470 a2=0 a3=7fff0db2045c items=0 ppid=2359 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:23.269000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:59:23.322000 audit[5560]: NETFILTER_CFG table=nat:136 family=2 entries=26 op=nft_register_rule pid=5560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:59:23.322000 audit[5560]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7fff0db20470 a2=0 a3=0 items=0 ppid=2359 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:23.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:59:23.579000 audit[5556]: USER_ACCT pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:23.583654 sshd[5556]: Accepted publickey for core from 139.178.68.195 port 34622 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:59:23.583000 audit[5556]: CRED_ACQ pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:23.583000 audit[5556]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef4470630 a2=3 a3=0 items=0 ppid=1 pid=5556 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:23.583000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:23.586425 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:59:23.597034 systemd[1]: Started session-20.scope. Aug 13 00:59:23.598326 systemd-logind[1323]: New session 20 of user core. Aug 13 00:59:23.619000 audit[5556]: USER_START pid=5556 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:23.623000 audit[5581]: CRED_ACQ pid=5581 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:23.905364 systemd[1]: run-containerd-runc-k8s.io-9728dac67616234d1fc98a6278091537e365a1128b2bd4c0c57ec2cf8aadc8a9-runc.Lugr6N.mount: Deactivated successfully. Aug 13 00:59:24.512363 sshd[5556]: pam_unix(sshd:session): session closed for user core Aug 13 00:59:24.514000 audit[5556]: USER_END pid=5556 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:24.514000 audit[5556]: CRED_DISP pid=5556 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:24.520295 systemd[1]: sshd@19-10.128.0.76:22-139.178.68.195:34622.service: Deactivated successfully. Aug 13 00:59:24.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.76:22-139.178.68.195:34622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:24.523862 systemd-logind[1323]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:59:24.523974 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:59:24.527820 systemd-logind[1323]: Removed session 20. Aug 13 00:59:24.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.76:22-139.178.68.195:34630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:24.560232 systemd[1]: Started sshd@20-10.128.0.76:22-139.178.68.195:34630.service. 
Aug 13 00:59:24.881000 audit[5626]: USER_ACCT pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:24.885909 sshd[5626]: Accepted publickey for core from 139.178.68.195 port 34630 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:59:24.885000 audit[5626]: CRED_ACQ pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:24.886000 audit[5626]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd724a36b0 a2=3 a3=0 items=0 ppid=1 pid=5626 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:24.886000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:24.889469 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:59:24.898469 systemd-logind[1323]: New session 21 of user core. Aug 13 00:59:24.899465 systemd[1]: Started session-21.scope. Aug 13 00:59:24.919000 audit[5626]: USER_START pid=5626 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:24.922000 audit[5629]: CRED_ACQ pid=5629 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:25.351136 sshd[5626]: pam_unix(sshd:session): session closed for user core Aug 13 00:59:25.355000 audit[5626]: USER_END pid=5626 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:25.356000 audit[5626]: CRED_DISP pid=5626 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:25.362076 systemd-logind[1323]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:59:25.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.76:22-139.178.68.195:34630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:25.364788 systemd[1]: sshd@20-10.128.0.76:22-139.178.68.195:34630.service: Deactivated successfully. Aug 13 00:59:25.366566 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:59:25.370947 systemd-logind[1323]: Removed session 21. 
Aug 13 00:59:29.765042 kernel: kauditd_printk_skb: 27 callbacks suppressed Aug 13 00:59:29.765352 kernel: audit: type=1325 audit(1755046769.741:554): table=filter:137 family=2 entries=20 op=nft_register_rule pid=5668 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:59:29.741000 audit[5668]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=5668 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:59:29.741000 audit[5668]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffea9bbbb40 a2=0 a3=7ffea9bbbb2c items=0 ppid=2359 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:29.802636 kernel: audit: type=1300 audit(1755046769.741:554): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffea9bbbb40 a2=0 a3=7ffea9bbbb2c items=0 ppid=2359 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:29.741000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:59:29.819638 kernel: audit: type=1327 audit(1755046769.741:554): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:59:29.823000 audit[5668]: NETFILTER_CFG table=nat:138 family=2 entries=110 op=nft_register_chain pid=5668 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:59:29.842633 kernel: audit: type=1325 audit(1755046769.823:555): table=nat:138 family=2 entries=110 op=nft_register_chain pid=5668 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:59:29.842893 kernel: audit: type=1300 audit(1755046769.823:555): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffea9bbbb40 a2=0 a3=7ffea9bbbb2c items=0 ppid=2359 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:29.823000 audit[5668]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffea9bbbb40 a2=0 a3=7ffea9bbbb2c items=0 ppid=2359 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:29.876638 kernel: audit: type=1327 audit(1755046769.823:555): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:59:29.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:59:30.396982 systemd[1]: Started sshd@21-10.128.0.76:22-139.178.68.195:44276.service. Aug 13 00:59:30.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.76:22-139.178.68.195:44276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:59:30.425642 kernel: audit: type=1130 audit(1755046770.396:556): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.76:22-139.178.68.195:44276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:30.732000 audit[5670]: USER_ACCT pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:30.765282 sshd[5670]: Accepted publickey for core from 139.178.68.195 port 44276 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:59:30.766027 kernel: audit: type=1101 audit(1755046770.732:557): pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:30.767322 sshd[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:59:30.766000 audit[5670]: CRED_ACQ pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:30.782911 systemd-logind[1323]: New session 22 of user core. Aug 13 00:59:30.806082 kernel: audit: type=1103 audit(1755046770.766:558): pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:30.784285 systemd[1]: Started session-22.scope. 
Aug 13 00:59:30.825747 kernel: audit: type=1006 audit(1755046770.766:559): pid=5670 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Aug 13 00:59:30.766000 audit[5670]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe81e513a0 a2=3 a3=0 items=0 ppid=1 pid=5670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:30.766000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:30.827000 audit[5670]: USER_START pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:30.830000 audit[5673]: CRED_ACQ pid=5673 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:31.239000 audit[5670]: USER_END pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:31.237733 sshd[5670]: pam_unix(sshd:session): session closed for user core Aug 13 00:59:31.239000 audit[5670]: CRED_DISP pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:31.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.76:22-139.178.68.195:44276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:31.243246 systemd[1]: sshd@21-10.128.0.76:22-139.178.68.195:44276.service: Deactivated successfully. Aug 13 00:59:31.244680 systemd-logind[1323]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:59:31.246098 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:59:31.249287 systemd-logind[1323]: Removed session 22. Aug 13 00:59:36.286266 systemd[1]: Started sshd@22-10.128.0.76:22-139.178.68.195:44282.service. Aug 13 00:59:36.307692 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:59:36.307907 kernel: audit: type=1130 audit(1755046776.286:565): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.76:22-139.178.68.195:44282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:36.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.76:22-139.178.68.195:44282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:59:36.629000 audit[5692]: USER_ACCT pid=5692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:36.655889 sshd[5692]: Accepted publickey for core from 139.178.68.195 port 44282 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:59:36.661641 kernel: audit: type=1101 audit(1755046776.629:566): pid=5692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:36.664065 sshd[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:59:36.662000 audit[5692]: CRED_ACQ pid=5692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:36.676210 systemd[1]: Started session-23.scope. Aug 13 00:59:36.677712 systemd-logind[1323]: New session 23 of user core. Aug 13 00:59:36.703558 kernel: audit: type=1103 audit(1755046776.662:567): pid=5692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:36.722901 kernel: audit: type=1006 audit(1755046776.662:568): pid=5692 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Aug 13 00:59:36.662000 audit[5692]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4e4db420 a2=3 a3=0 items=0 ppid=1 pid=5692 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:36.753241 kernel: audit: type=1300 audit(1755046776.662:568): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4e4db420 a2=3 a3=0 items=0 ppid=1 pid=5692 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:36.764190 kernel: audit: type=1327 audit(1755046776.662:568): proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:36.662000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:36.696000 audit[5692]: USER_START pid=5692 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:36.798620 kernel: audit: type=1105 audit(1755046776.696:569): pid=5692 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:36.720000 audit[5695]: CRED_ACQ pid=5695 uid=0 auid=500 ses=23 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:36.826629 kernel: audit: type=1103 audit(1755046776.720:570): pid=5695 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:37.174486 sshd[5692]: pam_unix(sshd:session): session closed for user core Aug 13 00:59:37.210214 kernel: audit: type=1106 audit(1755046777.175:571): pid=5692 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:37.175000 audit[5692]: USER_END pid=5692 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:37.179948 systemd[1]: sshd@22-10.128.0.76:22-139.178.68.195:44282.service: Deactivated successfully. Aug 13 00:59:37.181339 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:59:37.211494 systemd-logind[1323]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:59:37.175000 audit[5692]: CRED_DISP pid=5692 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:37.222806 systemd-logind[1323]: Removed session 23. Aug 13 00:59:37.237630 kernel: audit: type=1104 audit(1755046777.175:572): pid=5692 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:37.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.76:22-139.178.68.195:44282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:42.218512 systemd[1]: Started sshd@23-10.128.0.76:22-139.178.68.195:51230.service. Aug 13 00:59:42.253644 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:59:42.253817 kernel: audit: type=1130 audit(1755046782.217:574): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.76:22-139.178.68.195:51230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:42.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.76:22-139.178.68.195:51230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:59:42.585000 audit[5718]: USER_ACCT pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:42.620269 kernel: audit: type=1101 audit(1755046782.585:575): pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:42.620759 sshd[5718]: Accepted publickey for core from 139.178.68.195 port 51230 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 00:59:42.622800 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:59:42.635320 systemd[1]: Started session-24.scope. Aug 13 00:59:42.636711 systemd-logind[1323]: New session 24 of user core. Aug 13 00:59:42.682891 kernel: audit: type=1103 audit(1755046782.620:576): pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:42.620000 audit[5718]: CRED_ACQ pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:42.702718 kernel: audit: type=1006 audit(1755046782.620:577): pid=5718 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Aug 13 00:59:42.620000 audit[5718]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4beaeb60 a2=3 a3=0 items=0 ppid=1 pid=5718 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:42.732641 kernel: audit: type=1300 audit(1755046782.620:577): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4beaeb60 a2=3 a3=0 items=0 ppid=1 pid=5718 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:59:42.620000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:42.776214 kernel: audit: type=1327 audit(1755046782.620:577): proctitle=737368643A20636F7265205B707269765D Aug 13 00:59:42.776490 kernel: audit: type=1105 audit(1755046782.648:578): pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:42.648000 audit[5718]: USER_START pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:42.652000 audit[5721]: CRED_ACQ pid=5721 uid=0 auid=500 ses=24 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:42.846643 kernel: audit: type=1103 audit(1755046782.652:579): pid=5721 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:43.150506 sshd[5718]: pam_unix(sshd:session): session closed for user core Aug 13 00:59:43.150000 audit[5718]: USER_END pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:43.157314 systemd[1]: sshd@23-10.128.0.76:22-139.178.68.195:51230.service: Deactivated successfully. Aug 13 00:59:43.159112 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:59:43.175574 systemd-logind[1323]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:59:43.177372 systemd-logind[1323]: Removed session 24. Aug 13 00:59:43.185636 kernel: audit: type=1106 audit(1755046783.150:580): pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:43.151000 audit[5718]: CRED_DISP pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:43.256647 kernel: audit: type=1104 audit(1755046783.151:581): pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:59:43.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.76:22-139.178.68.195:51230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'