Sep 13 00:48:59.990204 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:48:59.990233 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:48:59.990251 kernel: BIOS-provided physical RAM map:
Sep 13 00:48:59.990261 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:48:59.990271 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 13 00:48:59.990280 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 13 00:48:59.990293 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 13 00:48:59.990304 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 13 00:48:59.990318 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 13 00:48:59.990328 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 13 00:48:59.990338 kernel: NX (Execute Disable) protection: active
Sep 13 00:48:59.990350 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
Sep 13 00:48:59.990362 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
Sep 13 00:48:59.990374 kernel: extended physical RAM map:
Sep 13 00:48:59.990392 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:48:59.990405 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable
Sep 13 00:48:59.990418 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable
Sep 13 00:48:59.990431 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable
Sep 13 00:48:59.990443 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 13 00:48:59.990453 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 13 00:48:59.990464 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 13 00:48:59.990475 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 13 00:48:59.990486 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 13 00:48:59.990497 kernel: efi: EFI v2.70 by EDK II
Sep 13 00:48:59.990513 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98
Sep 13 00:48:59.990525 kernel: SMBIOS 2.7 present.
Sep 13 00:48:59.990538 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 13 00:48:59.990550 kernel: Hypervisor detected: KVM
Sep 13 00:48:59.990563 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:48:59.990576 kernel: kvm-clock: cpu 0, msr 4e19f001, primary cpu clock
Sep 13 00:48:59.990589 kernel: kvm-clock: using sched offset of 4085551277 cycles
Sep 13 00:48:59.990604 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:48:59.990616 kernel: tsc: Detected 2500.006 MHz processor
Sep 13 00:48:59.990631 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:48:59.990644 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:48:59.990660 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 13 00:48:59.990675 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:48:59.990688 kernel: Using GB pages for direct mapping
Sep 13 00:48:59.990701 kernel: Secure boot disabled
Sep 13 00:48:59.990715 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:48:59.990733 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 13 00:48:59.990747 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 13 00:48:59.990765 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 13 00:48:59.990777 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 13 00:48:59.990789 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 13 00:48:59.990802 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 13 00:48:59.990815 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 13 00:48:59.990828 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 13 00:48:59.990839 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 13 00:48:59.990855 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 13 00:48:59.990867 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:48:59.997933 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:48:59.997948 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 13 00:48:59.997962 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 13 00:48:59.997976 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 13 00:48:59.997989 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 13 00:48:59.998003 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 13 00:48:59.998016 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 13 00:48:59.998036 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 13 00:48:59.998049 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 13 00:48:59.998062 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 13 00:48:59.998075 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 13 00:48:59.998088 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 13 00:48:59.998101 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 13 00:48:59.998114 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:48:59.998128 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:48:59.998141 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 13 00:48:59.998157 kernel: NUMA: Initialized distance table, cnt=1
Sep 13 00:48:59.998170 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 13 00:48:59.998184 kernel: Zone ranges:
Sep 13 00:48:59.998197 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:48:59.998210 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 13 00:48:59.998223 kernel: Normal empty
Sep 13 00:48:59.998237 kernel: Movable zone start for each node
Sep 13 00:48:59.998250 kernel: Early memory node ranges
Sep 13 00:48:59.998264 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:48:59.998280 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 13 00:48:59.998293 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 13 00:48:59.998306 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 13 00:48:59.998319 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:48:59.998332 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:48:59.998346 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 13 00:48:59.998359 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 13 00:48:59.998373 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 13 00:48:59.998386 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:48:59.998402 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 13 00:48:59.998415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:48:59.998428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:48:59.998441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:48:59.998455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:48:59.998468 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:48:59.998481 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:48:59.998494 kernel: TSC deadline timer available
Sep 13 00:48:59.998507 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:48:59.998523 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 13 00:48:59.998536 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:48:59.998549 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:48:59.998562 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:48:59.998576 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 13 00:48:59.998589 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 13 00:48:59.998602 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:48:59.998615 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0
Sep 13 00:48:59.998628 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:48:59.998644 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:48:59.998657 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 13 00:48:59.998670 kernel: Policy zone: DMA32
Sep 13 00:48:59.998685 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:48:59.998699 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:48:59.998713 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:48:59.998726 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:48:59.998739 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:48:59.998756 kernel: Memory: 1876640K/2037804K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 160904K reserved, 0K cma-reserved)
Sep 13 00:48:59.998769 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:48:59.998782 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:48:59.998795 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:48:59.998809 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:48:59.998822 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:48:59.998836 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:48:59.998862 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:48:59.998892 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:48:59.998907 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:48:59.998921 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:48:59.998935 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:48:59.998952 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:48:59.998966 kernel: random: crng init done
Sep 13 00:48:59.998979 kernel: Console: colour dummy device 80x25
Sep 13 00:48:59.998993 kernel: printk: console [tty0] enabled
Sep 13 00:48:59.999007 kernel: printk: console [ttyS0] enabled
Sep 13 00:48:59.999021 kernel: ACPI: Core revision 20210730
Sep 13 00:48:59.999035 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 13 00:48:59.999052 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:48:59.999066 kernel: x2apic enabled
Sep 13 00:48:59.999080 kernel: Switched APIC routing to physical x2apic.
Sep 13 00:48:59.999095 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Sep 13 00:48:59.999109 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006)
Sep 13 00:48:59.999123 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:48:59.999137 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:48:59.999154 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:48:59.999167 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:48:59.999181 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:48:59.999195 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:48:59.999209 kernel: RETBleed: Vulnerable
Sep 13 00:48:59.999223 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:48:59.999236 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:48:59.999250 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:48:59.999264 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 13 00:48:59.999277 kernel: active return thunk: its_return_thunk
Sep 13 00:48:59.999291 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:48:59.999308 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:48:59.999322 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:48:59.999336 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:48:59.999350 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 13 00:48:59.999363 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 13 00:48:59.999377 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:48:59.999391 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:48:59.999405 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:48:59.999418 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 13 00:48:59.999432 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:48:59.999446 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 13 00:48:59.999462 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 13 00:48:59.999476 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 13 00:48:59.999490 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 13 00:48:59.999504 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 13 00:48:59.999517 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 13 00:48:59.999531 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 13 00:48:59.999545 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:48:59.999559 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:48:59.999572 kernel: LSM: Security Framework initializing
Sep 13 00:48:59.999586 kernel: SELinux: Initializing.
Sep 13 00:48:59.999600 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:48:59.999616 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:48:59.999630 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 13 00:48:59.999645 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 13 00:48:59.999659 kernel: signal: max sigframe size: 3632
Sep 13 00:48:59.999673 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:48:59.999687 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:48:59.999701 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:48:59.999714 kernel: x86: Booting SMP configuration:
Sep 13 00:48:59.999728 kernel: .... node #0, CPUs: #1
Sep 13 00:48:59.999742 kernel: kvm-clock: cpu 1, msr 4e19f041, secondary cpu clock
Sep 13 00:48:59.999759 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0
Sep 13 00:48:59.999774 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 13 00:48:59.999789 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:48:59.999803 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:48:59.999817 kernel: smpboot: Max logical packages: 1
Sep 13 00:48:59.999831 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS)
Sep 13 00:48:59.999845 kernel: devtmpfs: initialized
Sep 13 00:48:59.999858 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:48:59.999884 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 13 00:48:59.999899 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:48:59.999913 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:48:59.999927 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:48:59.999941 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:48:59.999955 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:48:59.999970 kernel: audit: type=2000 audit(1757724538.939:1): state=initialized audit_enabled=0 res=1
Sep 13 00:48:59.999983 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:48:59.999998 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:49:00.000014 kernel: cpuidle: using governor menu
Sep 13 00:49:00.000029 kernel: ACPI: bus type PCI registered
Sep 13 00:49:00.000043 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:49:00.000057 kernel: dca service started, version 1.12.1
Sep 13 00:49:00.000071 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:49:00.000085 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:49:00.000099 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:49:00.000113 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:49:00.000127 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:49:00.000144 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:49:00.000158 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:49:00.000171 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:49:00.000185 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:49:00.000199 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:49:00.000213 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 13 00:49:00.000227 kernel: ACPI: Interpreter enabled
Sep 13 00:49:00.000241 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:49:00.000255 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:49:00.000272 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:49:00.000286 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:49:00.000300 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:49:00.000508 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:49:00.000645 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 13 00:49:00.000664 kernel: acpiphp: Slot [3] registered
Sep 13 00:49:00.000679 kernel: acpiphp: Slot [4] registered
Sep 13 00:49:00.000693 kernel: acpiphp: Slot [5] registered
Sep 13 00:49:00.000712 kernel: acpiphp: Slot [6] registered
Sep 13 00:49:00.000727 kernel: acpiphp: Slot [7] registered
Sep 13 00:49:00.000741 kernel: acpiphp: Slot [8] registered
Sep 13 00:49:00.000756 kernel: acpiphp: Slot [9] registered
Sep 13 00:49:00.000771 kernel: acpiphp: Slot [10] registered
Sep 13 00:49:00.000786 kernel: acpiphp: Slot [11] registered
Sep 13 00:49:00.000801 kernel: acpiphp: Slot [12] registered
Sep 13 00:49:00.000816 kernel: acpiphp: Slot [13] registered
Sep 13 00:49:00.000830 kernel: acpiphp: Slot [14] registered
Sep 13 00:49:00.000848 kernel: acpiphp: Slot [15] registered
Sep 13 00:49:00.000863 kernel: acpiphp: Slot [16] registered
Sep 13 00:49:00.000894 kernel: acpiphp: Slot [17] registered
Sep 13 00:49:00.000909 kernel: acpiphp: Slot [18] registered
Sep 13 00:49:00.000923 kernel: acpiphp: Slot [19] registered
Sep 13 00:49:00.000938 kernel: acpiphp: Slot [20] registered
Sep 13 00:49:00.000953 kernel: acpiphp: Slot [21] registered
Sep 13 00:49:00.000967 kernel: acpiphp: Slot [22] registered
Sep 13 00:49:00.000983 kernel: acpiphp: Slot [23] registered
Sep 13 00:49:00.000997 kernel: acpiphp: Slot [24] registered
Sep 13 00:49:00.001015 kernel: acpiphp: Slot [25] registered
Sep 13 00:49:00.001030 kernel: acpiphp: Slot [26] registered
Sep 13 00:49:00.001045 kernel: acpiphp: Slot [27] registered
Sep 13 00:49:00.001060 kernel: acpiphp: Slot [28] registered
Sep 13 00:49:00.001075 kernel: acpiphp: Slot [29] registered
Sep 13 00:49:00.001089 kernel: acpiphp: Slot [30] registered
Sep 13 00:49:00.001104 kernel: acpiphp: Slot [31] registered
Sep 13 00:49:00.001119 kernel: PCI host bridge to bus 0000:00
Sep 13 00:49:00.001265 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:49:00.001396 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:49:00.001508 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:49:00.001621 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:49:00.001733 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:49:00.001845 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:49:00.002001 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:49:00.002152 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:49:00.002285 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 13 00:49:00.002410 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 13 00:49:00.002533 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 13 00:49:00.002656 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 13 00:49:00.002780 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 13 00:49:00.002918 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 13 00:49:00.003047 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 13 00:49:00.003171 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 13 00:49:00.003298 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 13 00:49:00.003419 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 13 00:49:00.003534 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 13 00:49:00.003649 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 13 00:49:00.003768 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:49:00.003909 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 13 00:49:00.004026 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 13 00:49:00.004150 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 13 00:49:00.004268 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 13 00:49:00.004284 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:49:00.004297 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:49:00.004310 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:49:00.004326 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:49:00.004340 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:49:00.004353 kernel: iommu: Default domain type: Translated
Sep 13 00:49:00.004365 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:49:00.004482 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 13 00:49:00.004598 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:49:00.004716 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 13 00:49:00.004732 kernel: vgaarb: loaded
Sep 13 00:49:00.004747 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:49:00.004761 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:49:00.004773 kernel: PTP clock support registered
Sep 13 00:49:00.004786 kernel: Registered efivars operations
Sep 13 00:49:00.004799 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:49:00.004812 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:49:00.004824 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff]
Sep 13 00:49:00.004837 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 13 00:49:00.004850 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 13 00:49:00.004866 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 13 00:49:00.011620 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 13 00:49:00.011637 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:49:00.011652 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:49:00.011666 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:49:00.011680 kernel: pnp: PnP ACPI init
Sep 13 00:49:00.011694 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 00:49:00.011708 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:49:00.011722 kernel: NET: Registered PF_INET protocol family
Sep 13 00:49:00.011742 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:49:00.011756 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:49:00.011769 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:49:00.011783 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:49:00.011797 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 13 00:49:00.011811 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:49:00.011825 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:49:00.011839 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:49:00.011852 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:49:00.011981 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:49:00.012134 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:49:00.012244 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:49:00.012369 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:49:00.012476 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:49:00.012587 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:49:00.012721 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:49:00.012849 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Sep 13 00:49:00.012886 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:49:00.012900 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:49:00.012915 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Sep 13 00:49:00.012928 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:49:00.012940 kernel: Initialise system trusted keyrings
Sep 13 00:49:00.012953 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:49:00.012967 kernel: Key type asymmetric registered
Sep 13 00:49:00.012981 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:49:00.012999 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:49:00.013014 kernel: io scheduler mq-deadline registered
Sep 13 00:49:00.013029 kernel: io scheduler kyber registered
Sep 13 00:49:00.013043 kernel: io scheduler bfq registered
Sep 13 00:49:00.013057 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:49:00.013072 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:49:00.013087 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:49:00.013102 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:49:00.013117 kernel: i8042: Warning: Keylock active
Sep 13 00:49:00.013131 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:49:00.013149 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:49:00.013282 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 13 00:49:00.013407 kernel: rtc_cmos 00:00: registered as rtc0
Sep 13 00:49:00.013519 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:48:59 UTC (1757724539)
Sep 13 00:49:00.013631 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 13 00:49:00.013648 kernel: intel_pstate: CPU model not supported
Sep 13 00:49:00.013663 kernel: efifb: probing for efifb
Sep 13 00:49:00.013681 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Sep 13 00:49:00.013695 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 13 00:49:00.013710 kernel: efifb: scrolling: redraw
Sep 13 00:49:00.013724 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:49:00.013739 kernel: Console: switching to colour frame buffer device 100x37
Sep 13 00:49:00.013754 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:49:00.013791 kernel: pstore: Registered efi as persistent store backend
Sep 13 00:49:00.013809 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:49:00.013824 kernel: Segment Routing with IPv6
Sep 13 00:49:00.013842 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:49:00.013857 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:49:00.013884 kernel: Key type dns_resolver registered
Sep 13 00:49:00.013899 kernel: IPI shorthand broadcast: enabled
Sep 13 00:49:00.013914 kernel: sched_clock: Marking stable (372125855, 167172966)->(605182607, -65883786)
Sep 13 00:49:00.013929 kernel: registered taskstats version 1
Sep 13 00:49:00.013944 kernel: Loading compiled-in X.509 certificates
Sep 13 00:49:00.013959 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:49:00.013975 kernel: Key type .fscrypt registered
Sep 13 00:49:00.013993 kernel: Key type fscrypt-provisioning registered
Sep 13 00:49:00.014008 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:49:00.014023 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:49:00.014039 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:49:00.014054 kernel: ima: No architecture policies found
Sep 13 00:49:00.014069 kernel: clk: Disabling unused clocks
Sep 13 00:49:00.014084 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:49:00.014099 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:49:00.014114 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:49:00.014131 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:49:00.014145 kernel: Run /init as init process
Sep 13 00:49:00.014160 kernel: with arguments:
Sep 13 00:49:00.014175 kernel: /init
Sep 13 00:49:00.014189 kernel: with environment:
Sep 13 00:49:00.014220 kernel: HOME=/
Sep 13 00:49:00.014244 kernel: TERM=linux
Sep 13 00:49:00.014262 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:49:00.014280 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:49:00.014301 systemd[1]: Detected virtualization amazon.
Sep 13 00:49:00.014316 systemd[1]: Detected architecture x86-64.
Sep 13 00:49:00.014328 systemd[1]: Running in initrd.
Sep 13 00:49:00.014343 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:49:00.014358 systemd[1]: Hostname set to .
Sep 13 00:49:00.014374 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:49:00.014388 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:49:00.014406 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:49:00.014419 systemd[1]: Reached target cryptsetup.target. Sep 13 00:49:00.014437 systemd[1]: Reached target paths.target. Sep 13 00:49:00.014454 systemd[1]: Reached target slices.target. Sep 13 00:49:00.014466 systemd[1]: Reached target swap.target. Sep 13 00:49:00.014482 systemd[1]: Reached target timers.target. Sep 13 00:49:00.014496 systemd[1]: Listening on iscsid.socket. Sep 13 00:49:00.014511 systemd[1]: Listening on iscsiuio.socket. Sep 13 00:49:00.014527 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:49:00.014544 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:49:00.014561 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:49:00.014577 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:49:00.014593 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:49:00.014615 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:49:00.014632 systemd[1]: Reached target sockets.target. Sep 13 00:49:00.014648 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:49:00.014665 systemd[1]: Finished network-cleanup.service. Sep 13 00:49:00.014681 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:49:00.014698 systemd[1]: Starting systemd-journald.service... Sep 13 00:49:00.014715 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:49:00.014731 systemd[1]: Starting systemd-resolved.service... Sep 13 00:49:00.014748 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 00:49:00.014768 systemd[1]: Finished kmod-static-nodes.service. 
Sep 13 00:49:00.014785 kernel: audit: type=1130 audit(1757724539.998:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.014802 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:49:00.014819 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:49:00.014841 systemd-journald[185]: Journal started Sep 13 00:49:00.014941 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2c3d5b7eedec2455615fdc1ada5649) is 4.8M, max 38.3M, 33.5M free. Sep 13 00:48:59.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.015522 systemd-modules-load[186]: Inserted module 'overlay' Sep 13 00:49:00.029726 kernel: audit: type=1130 audit(1757724540.018:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.029762 systemd[1]: Started systemd-journald.service. Sep 13 00:49:00.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.036601 systemd-resolved[187]: Positive Trust Anchors: Sep 13 00:49:00.038815 kernel: audit: type=1130 audit(1757724540.028:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:00.036848 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:49:00.036916 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:49:00.058990 kernel: audit: type=1130 audit(1757724540.049:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.038004 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 00:49:00.049572 systemd-resolved[187]: Defaulting to hostname 'linux'. Sep 13 00:49:00.053307 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 00:49:00.065846 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:49:00.085745 kernel: audit: type=1130 audit(1757724540.066:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.067195 systemd[1]: Started systemd-resolved.service. 
Sep 13 00:49:00.068483 systemd[1]: Reached target nss-lookup.target. Sep 13 00:49:00.089995 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:49:00.100307 kernel: audit: type=1130 audit(1757724540.091:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.100378 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:49:00.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.104203 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:49:00.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.114948 kernel: audit: type=1130 audit(1757724540.104:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.115099 systemd[1]: Starting dracut-cmdline.service... 
Sep 13 00:49:00.119898 kernel: Bridge firewalling registered Sep 13 00:49:00.119991 systemd-modules-load[186]: Inserted module 'br_netfilter' Sep 13 00:49:00.129166 dracut-cmdline[202]: dracut-dracut-053 Sep 13 00:49:00.133963 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:49:00.150907 kernel: SCSI subsystem initialized Sep 13 00:49:00.172071 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:49:00.172154 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:49:00.173336 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:49:00.179558 systemd-modules-load[186]: Inserted module 'dm_multipath' Sep 13 00:49:00.180550 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:49:00.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.190297 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:49:00.192649 kernel: audit: type=1130 audit(1757724540.181:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.202003 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 00:49:00.211778 kernel: audit: type=1130 audit(1757724540.201:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.232901 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:49:00.253895 kernel: iscsi: registered transport (tcp) Sep 13 00:49:00.290074 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:49:00.290160 kernel: QLogic iSCSI HBA Driver Sep 13 00:49:00.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.332584 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:49:00.335383 systemd[1]: Starting dracut-pre-udev.service... 
Sep 13 00:49:00.389940 kernel: raid6: avx512x4 gen() 17264 MB/s Sep 13 00:49:00.407908 kernel: raid6: avx512x4 xor() 7538 MB/s Sep 13 00:49:00.425933 kernel: raid6: avx512x2 gen() 17286 MB/s Sep 13 00:49:00.443933 kernel: raid6: avx512x2 xor() 22926 MB/s Sep 13 00:49:00.461932 kernel: raid6: avx512x1 gen() 17455 MB/s Sep 13 00:49:00.479926 kernel: raid6: avx512x1 xor() 21924 MB/s Sep 13 00:49:00.497929 kernel: raid6: avx2x4 gen() 17453 MB/s Sep 13 00:49:00.515932 kernel: raid6: avx2x4 xor() 7085 MB/s Sep 13 00:49:00.533933 kernel: raid6: avx2x2 gen() 17210 MB/s Sep 13 00:49:00.551927 kernel: raid6: avx2x2 xor() 18106 MB/s Sep 13 00:49:00.569929 kernel: raid6: avx2x1 gen() 13633 MB/s Sep 13 00:49:00.587923 kernel: raid6: avx2x1 xor() 15838 MB/s Sep 13 00:49:00.605902 kernel: raid6: sse2x4 gen() 9581 MB/s Sep 13 00:49:00.623940 kernel: raid6: sse2x4 xor() 5817 MB/s Sep 13 00:49:00.641917 kernel: raid6: sse2x2 gen() 10520 MB/s Sep 13 00:49:00.659927 kernel: raid6: sse2x2 xor() 6131 MB/s Sep 13 00:49:00.677925 kernel: raid6: sse2x1 gen() 9528 MB/s Sep 13 00:49:00.696184 kernel: raid6: sse2x1 xor() 4850 MB/s Sep 13 00:49:00.696242 kernel: raid6: using algorithm avx512x1 gen() 17455 MB/s Sep 13 00:49:00.696262 kernel: raid6: .... xor() 21924 MB/s, rmw enabled Sep 13 00:49:00.697282 kernel: raid6: using avx512x2 recovery algorithm Sep 13 00:49:00.712905 kernel: xor: automatically using best checksumming function avx Sep 13 00:49:00.815906 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:49:00.825169 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:49:00.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.824000 audit: BPF prog-id=7 op=LOAD Sep 13 00:49:00.824000 audit: BPF prog-id=8 op=LOAD Sep 13 00:49:00.826750 systemd[1]: Starting systemd-udevd.service... 
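The raid6 lines above show the kernel benchmarking every available gen()/xor() implementation and keeping the fastest generator ("raid6: using algorithm avx512x1 gen() 17455 MB/s"). A minimal sketch of that selection step, using the gen() throughput figures from this log (the kernel picks the recovery algorithm separately, which this sketch does not model):

```python
# gen() throughput results (MB/s) as reported in the boot log above.
gen_results = {
    "avx512x4": 17264, "avx512x2": 17286, "avx512x1": 17455,
    "avx2x4": 17453, "avx2x2": 17210, "avx2x1": 13633,
    "sse2x4": 9581, "sse2x2": 10520, "sse2x1": 9528,
}

def pick_fastest(results: dict) -> str:
    # Mirror the kernel's choice: highest measured gen() throughput wins.
    return max(results, key=results.get)

print(pick_fastest(gen_results))  # avx512x1
```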
Sep 13 00:49:00.840239 systemd-udevd[385]: Using default interface naming scheme 'v252'. Sep 13 00:49:00.845665 systemd[1]: Started systemd-udevd.service. Sep 13 00:49:00.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.848250 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:49:00.868232 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation Sep 13 00:49:00.901469 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:49:00.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.902929 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:49:00.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:00.946707 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:49:01.001891 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:49:01.030902 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:49:01.032894 kernel: AES CTR mode by8 optimization enabled Sep 13 00:49:01.046751 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 13 00:49:01.059420 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 13 00:49:01.059595 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Sep 13 00:49:01.059750 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 13 00:49:01.059953 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 13 00:49:01.059975 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:17:7c:1e:df:5d Sep 13 00:49:01.067896 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 13 00:49:01.074839 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:49:01.074927 kernel: GPT:9289727 != 16777215 Sep 13 00:49:01.074948 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:49:01.077271 kernel: GPT:9289727 != 16777215 Sep 13 00:49:01.077339 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:49:01.079898 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:49:01.084079 (udev-worker)[431]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:49:01.145903 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (433) Sep 13 00:49:01.209311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:49:01.224178 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:49:01.234970 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:49:01.246057 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:49:01.246809 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:49:01.254367 systemd[1]: Starting disk-uuid.service... Sep 13 00:49:01.261909 disk-uuid[594]: Primary Header is updated. Sep 13 00:49:01.261909 disk-uuid[594]: Secondary Entries is updated. Sep 13 00:49:01.261909 disk-uuid[594]: Secondary Header is updated. Sep 13 00:49:01.282921 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:49:01.309904 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:49:02.329055 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:49:02.329453 disk-uuid[595]: The operation has completed successfully. 
Sep 13 00:49:02.673328 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:49:02.677768 systemd[1]: Finished disk-uuid.service. Sep 13 00:49:02.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:02.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:02.701068 systemd[1]: Starting verity-setup.service... Sep 13 00:49:02.769911 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:49:02.919283 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:49:02.923572 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:49:02.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:02.932230 systemd[1]: Finished verity-setup.service. Sep 13 00:49:03.209158 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:49:03.209996 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:49:03.211040 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:49:03.212116 systemd[1]: Starting ignition-setup.service... Sep 13 00:49:03.217125 systemd[1]: Starting parse-ip-for-networkd.service... 
Sep 13 00:49:03.247440 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:49:03.247519 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:49:03.247541 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 13 00:49:03.281899 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:49:03.299320 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:49:03.300365 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:49:03.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:03.300000 audit: BPF prog-id=9 op=LOAD Sep 13 00:49:03.302809 systemd[1]: Starting systemd-networkd.service... Sep 13 00:49:03.312275 systemd[1]: Finished ignition-setup.service. Sep 13 00:49:03.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:03.315892 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:49:03.343258 systemd-networkd[1022]: lo: Link UP Sep 13 00:49:03.343270 systemd-networkd[1022]: lo: Gained carrier Sep 13 00:49:03.344056 systemd-networkd[1022]: Enumeration completed Sep 13 00:49:03.344188 systemd[1]: Started systemd-networkd.service. Sep 13 00:49:03.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:03.344498 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 13 00:49:03.356431 systemd-networkd[1022]: eth0: Link UP Sep 13 00:49:03.356438 systemd-networkd[1022]: eth0: Gained carrier Sep 13 00:49:03.363741 systemd[1]: Reached target network.target. Sep 13 00:49:03.367582 systemd[1]: Starting iscsiuio.service... Sep 13 00:49:03.371316 systemd-networkd[1022]: eth0: DHCPv4 address 172.31.30.243/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:49:03.378897 systemd[1]: Started iscsiuio.service. Sep 13 00:49:03.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:03.382865 systemd[1]: Starting iscsid.service... Sep 13 00:49:03.389188 iscsid[1029]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:49:03.389188 iscsid[1029]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 00:49:03.389188 iscsid[1029]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:49:03.389188 iscsid[1029]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:49:03.389188 iscsid[1029]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:49:03.389188 iscsid[1029]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:49:03.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:03.392213 systemd[1]: Started iscsid.service.
Sep 13 00:49:03.396028 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:49:03.425168 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:49:03.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:03.426133 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:49:03.427281 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:49:03.428594 systemd[1]: Reached target remote-fs.target. Sep 13 00:49:03.431290 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:49:03.461714 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:49:03.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:03.840467 ignition[1024]: Ignition 2.14.0 Sep 13 00:49:03.840484 ignition[1024]: Stage: fetch-offline Sep 13 00:49:03.840636 ignition[1024]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:49:03.840680 ignition[1024]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:49:03.859643 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:49:03.860335 ignition[1024]: Ignition finished successfully Sep 13 00:49:03.861810 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:49:03.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:03.864002 systemd[1]: Starting ignition-fetch.service... 
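Each Ignition stage above logs "parsing config with SHA512: 6629…" before acting on a config. That value is simply the SHA-512 hex digest of the raw config bytes; an equivalent computation (the `config_digest` helper name and the sample blob are illustrative, not Ignition's own code):

```python
import hashlib

def config_digest(raw: bytes) -> str:
    # Hex SHA-512 of a config blob, matching the form of the
    # "parsing config with SHA512: ..." lines Ignition emits.
    return hashlib.sha512(raw).hexdigest()

digest = config_digest(b"{}")
print(len(digest))  # 128 hex characters = 512 bits
```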
Sep 13 00:49:03.875461 ignition[1048]: Ignition 2.14.0 Sep 13 00:49:03.875475 ignition[1048]: Stage: fetch Sep 13 00:49:03.875690 ignition[1048]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:49:03.875724 ignition[1048]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:49:03.883927 ignition[1048]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:49:03.885127 ignition[1048]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:49:03.891768 ignition[1048]: INFO : PUT result: OK Sep 13 00:49:03.900788 ignition[1048]: DEBUG : parsed url from cmdline: "" Sep 13 00:49:03.900788 ignition[1048]: INFO : no config URL provided Sep 13 00:49:03.900788 ignition[1048]: INFO : reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:49:03.905356 ignition[1048]: INFO : no config at "/usr/lib/ignition/user.ign" Sep 13 00:49:03.905356 ignition[1048]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:49:03.905356 ignition[1048]: INFO : PUT result: OK Sep 13 00:49:03.905356 ignition[1048]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 13 00:49:03.905356 ignition[1048]: INFO : GET result: OK Sep 13 00:49:03.905356 ignition[1048]: DEBUG : parsing config with SHA512: fbc10ee03127a41d368dbf83e0beaf6311c6b64eab3c362527e622b30efca41af82adb030a6acfd5b7f643a037b8e019b8c01f7eee6f514bc4c7d2a8e5354d09 Sep 13 00:49:03.911910 unknown[1048]: fetched base config from "system" Sep 13 00:49:03.911927 unknown[1048]: fetched base config from "system" Sep 13 00:49:03.914678 ignition[1048]: fetch: fetch complete Sep 13 00:49:03.911947 unknown[1048]: fetched user config from "aws" Sep 13 00:49:03.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:49:03.914687 ignition[1048]: fetch: fetch passed Sep 13 00:49:03.916433 systemd[1]: Finished ignition-fetch.service. Sep 13 00:49:03.914755 ignition[1048]: Ignition finished successfully Sep 13 00:49:03.918834 systemd[1]: Starting ignition-kargs.service... Sep 13 00:49:03.931108 ignition[1054]: Ignition 2.14.0 Sep 13 00:49:03.931121 ignition[1054]: Stage: kargs Sep 13 00:49:03.931325 ignition[1054]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:49:03.931359 ignition[1054]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:49:03.938987 ignition[1054]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:49:03.940051 ignition[1054]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:49:03.940920 ignition[1054]: INFO : PUT result: OK Sep 13 00:49:03.943961 ignition[1054]: kargs: kargs passed Sep 13 00:49:03.944026 ignition[1054]: Ignition finished successfully Sep 13 00:49:03.946046 systemd[1]: Finished ignition-kargs.service. Sep 13 00:49:03.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:03.948294 systemd[1]: Starting ignition-disks.service... 
Sep 13 00:49:03.957637 ignition[1060]: Ignition 2.14.0 Sep 13 00:49:03.957649 ignition[1060]: Stage: disks Sep 13 00:49:03.957862 ignition[1060]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:49:03.957915 ignition[1060]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:49:03.965492 ignition[1060]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:49:03.966388 ignition[1060]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:49:03.967109 ignition[1060]: INFO : PUT result: OK Sep 13 00:49:03.970039 ignition[1060]: disks: disks passed Sep 13 00:49:03.970102 ignition[1060]: Ignition finished successfully Sep 13 00:49:03.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:03.971173 systemd[1]: Finished ignition-disks.service. Sep 13 00:49:03.972062 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:49:03.972544 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:49:03.972992 systemd[1]: Reached target local-fs.target. Sep 13 00:49:03.973732 systemd[1]: Reached target sysinit.target. Sep 13 00:49:03.974750 systemd[1]: Reached target basic.target. Sep 13 00:49:03.976609 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:49:04.014541 systemd-fsck[1068]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:49:04.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:04.017587 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:49:04.020309 systemd[1]: Mounting sysroot.mount... 
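The fetch, kargs, and disks stages above each issue "PUT http://169.254.169.254/latest/api/token" before any GET: that is the IMDSv2 pattern, where a session token is obtained with a PUT and then attached as a header to metadata requests. A sketch of the request shapes involved (pure request construction, no network I/O; the helper names and the TTL default are illustrative):

```python
# IMDSv2 request pattern seen in the Ignition log lines above:
# PUT for a session token, then token-bearing GETs.
IMDS = "http://169.254.169.254"

def token_request(ttl_seconds: int = 21600):
    # PUT /latest/api/token with the required TTL header.
    return ("PUT", f"{IMDS}/latest/api/token",
            {"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)})

def user_data_request(token: str):
    # GET the user-data document, carrying the session token.
    return ("GET", f"{IMDS}/2019-10-01/user-data",
            {"X-aws-ec2-metadata-token": token})

method, url, headers = token_request()
```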
Sep 13 00:49:04.047617 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 00:49:04.046684 systemd[1]: Mounted sysroot.mount.
Sep 13 00:49:04.048916 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 00:49:04.054551 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 00:49:04.057460 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 13 00:49:04.059312 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:49:04.059597 systemd[1]: Reached target ignition-diskful.target.
Sep 13 00:49:04.062706 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 00:49:04.083417 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:49:04.087264 systemd[1]: Starting initrd-setup-root.service...
Sep 13 00:49:04.100648 initrd-setup-root[1090]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:49:04.107971 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1085)
Sep 13 00:49:04.113240 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:49:04.113340 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:49:04.113363 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:49:04.119736 initrd-setup-root[1114]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:49:04.137900 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:49:04.138147 initrd-setup-root[1124]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:49:04.153674 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:49:04.162360 initrd-setup-root[1132]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:49:04.334251 systemd[1]: Finished initrd-setup-root.service.
Sep 13 00:49:04.343578 kernel: kauditd_printk_skb: 23 callbacks suppressed
Sep 13 00:49:04.343645 kernel: audit: type=1130 audit(1757724544.333:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:04.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:04.337849 systemd[1]: Starting ignition-mount.service...
Sep 13 00:49:04.346704 systemd[1]: Starting sysroot-boot.service...
Sep 13 00:49:04.353258 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:49:04.353488 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:49:04.366788 ignition[1150]: INFO : Ignition 2.14.0
Sep 13 00:49:04.367760 ignition[1150]: INFO : Stage: mount
Sep 13 00:49:04.369041 ignition[1150]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:49:04.371221 ignition[1150]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:49:04.384418 ignition[1150]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:49:04.385952 ignition[1150]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:49:04.387760 ignition[1150]: INFO : PUT result: OK
Sep 13 00:49:04.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:04.392785 ignition[1150]: INFO : mount: mount passed
Sep 13 00:49:04.392785 ignition[1150]: INFO : Ignition finished successfully
Sep 13 00:49:04.398693 kernel: audit: type=1130 audit(1757724544.390:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:04.391906 systemd[1]: Finished sysroot-boot.service.
Sep 13 00:49:04.399632 systemd[1]: Finished ignition-mount.service.
Sep 13 00:49:04.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:04.401689 systemd[1]: Starting ignition-files.service...
Sep 13 00:49:04.410134 kernel: audit: type=1130 audit(1757724544.398:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:04.413583 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:49:04.435934 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1160)
Sep 13 00:49:04.444707 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:49:04.444783 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:49:04.444804 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:49:04.505911 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:49:04.510102 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:49:04.547465 ignition[1179]: INFO : Ignition 2.14.0
Sep 13 00:49:04.547465 ignition[1179]: INFO : Stage: files
Sep 13 00:49:04.550055 ignition[1179]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:49:04.550055 ignition[1179]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:49:04.559215 ignition[1179]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:49:04.560230 ignition[1179]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:49:04.560230 ignition[1179]: INFO : PUT result: OK
Sep 13 00:49:04.564076 ignition[1179]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:49:04.576387 ignition[1179]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:49:04.576387 ignition[1179]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:49:04.588811 ignition[1179]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:49:04.595106 ignition[1179]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:49:04.595106 ignition[1179]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:49:04.590675 unknown[1179]: wrote ssh authorized keys file for user: core
Sep 13 00:49:04.598992 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:49:04.598992 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:49:04.598992 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:49:04.598992 ignition[1179]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:49:04.718739 ignition[1179]: INFO : GET result: OK
Sep 13 00:49:04.957188 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:49:04.959811 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:49:04.959811 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:49:04.959811 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:49:04.959811 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:49:04.959811 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 13 00:49:04.959811 ignition[1179]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:49:04.973827 ignition[1179]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem110853721"
Sep 13 00:49:04.973827 ignition[1179]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem110853721": device or resource busy
Sep 13 00:49:04.973827 ignition[1179]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem110853721", trying btrfs: device or resource busy
Sep 13 00:49:04.973827 ignition[1179]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem110853721"
Sep 13 00:49:04.973827 ignition[1179]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem110853721"
Sep 13 00:49:04.983743 ignition[1179]: INFO : op(3): [started] unmounting "/mnt/oem110853721"
Sep 13 00:49:04.984921 ignition[1179]: INFO : op(3): [finished] unmounting "/mnt/oem110853721"
Sep 13 00:49:04.984921 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 13 00:49:04.984921 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:49:04.984921 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:49:04.984921 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:49:04.984921 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:49:05.000247 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:49:05.000247 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:49:05.000247 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:49:05.000247 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:49:05.000247 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:49:05.000247 ignition[1179]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:49:05.000247 ignition[1179]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4178452239"
Sep 13 00:49:05.000247 ignition[1179]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4178452239": device or resource busy
Sep 13 00:49:05.000247 ignition[1179]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4178452239", trying btrfs: device or resource busy
Sep 13 00:49:05.000247 ignition[1179]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4178452239"
Sep 13 00:49:05.000247 ignition[1179]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4178452239"
Sep 13 00:49:05.000247 ignition[1179]: INFO : op(6): [started] unmounting "/mnt/oem4178452239"
Sep 13 00:49:05.000247 ignition[1179]: INFO : op(6): [finished] unmounting "/mnt/oem4178452239"
Sep 13 00:49:05.000247 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:49:05.000247 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:49:05.000247 ignition[1179]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:49:05.109082 systemd-networkd[1022]: eth0: Gained IPv6LL
Sep 13 00:49:05.336477 ignition[1179]: INFO : GET result: OK
Sep 13 00:49:05.861765 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:49:05.867175 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 13 00:49:05.867175 ignition[1179]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:49:05.874466 ignition[1179]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3001258114"
Sep 13 00:49:05.874466 ignition[1179]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3001258114": device or resource busy
Sep 13 00:49:05.874466 ignition[1179]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3001258114", trying btrfs: device or resource busy
Sep 13 00:49:05.874466 ignition[1179]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3001258114"
Sep 13 00:49:05.874466 ignition[1179]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3001258114"
Sep 13 00:49:05.874466 ignition[1179]: INFO : op(9): [started] unmounting "/mnt/oem3001258114"
Sep 13 00:49:05.889229 ignition[1179]: INFO : op(9): [finished] unmounting "/mnt/oem3001258114"
Sep 13 00:49:05.889229 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 13 00:49:05.889229 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 13 00:49:05.889229 ignition[1179]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:49:05.884745 systemd[1]: mnt-oem3001258114.mount: Deactivated successfully.
Sep 13 00:49:05.904554 ignition[1179]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1191148974"
Sep 13 00:49:05.904554 ignition[1179]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1191148974": device or resource busy
Sep 13 00:49:05.904554 ignition[1179]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1191148974", trying btrfs: device or resource busy
Sep 13 00:49:05.904554 ignition[1179]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1191148974"
Sep 13 00:49:05.918331 ignition[1179]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1191148974"
Sep 13 00:49:05.918331 ignition[1179]: INFO : op(c): [started] unmounting "/mnt/oem1191148974"
Sep 13 00:49:05.918331 ignition[1179]: INFO : op(c): [finished] unmounting "/mnt/oem1191148974"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(13): [started] processing unit "nvidia.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(14): [started] processing unit "containerd.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(14): op(15): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(14): [finished] processing unit "containerd.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:49:05.918331 ignition[1179]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Sep 13 00:49:05.993426 kernel: audit: type=1130 audit(1757724545.944:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:05.993464 kernel: audit: type=1130 audit(1757724545.967:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:05.993485 kernel: audit: type=1130 audit(1757724545.975:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:05.993504 kernel: audit: type=1131 audit(1757724545.975:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:05.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:05.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:05.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:05.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:05.913733 systemd[1]: mnt-oem1191148974.mount: Deactivated successfully.
Sep 13 00:49:05.995008 ignition[1179]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:49:05.995008 ignition[1179]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:49:05.995008 ignition[1179]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service"
Sep 13 00:49:05.995008 ignition[1179]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Sep 13 00:49:05.995008 ignition[1179]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service"
Sep 13 00:49:05.995008 ignition[1179]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service"
Sep 13 00:49:05.995008 ignition[1179]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:49:05.995008 ignition[1179]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:49:05.995008 ignition[1179]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:49:05.995008 ignition[1179]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:49:05.995008 ignition[1179]: INFO : files: files passed
Sep 13 00:49:05.995008 ignition[1179]: INFO : Ignition finished successfully
Sep 13 00:49:06.032786 kernel: audit: type=1130 audit(1757724546.012:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.032823 kernel: audit: type=1131 audit(1757724546.012:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:05.944035 systemd[1]: Finished ignition-files.service.
Sep 13 00:49:05.952308 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 00:49:06.035885 initrd-setup-root-after-ignition[1203]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:49:05.958405 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 00:49:05.961390 systemd[1]: Starting ignition-quench.service...
Sep 13 00:49:05.967364 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 00:49:05.969643 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:49:05.969766 systemd[1]: Finished ignition-quench.service.
Sep 13 00:49:05.977599 systemd[1]: Reached target ignition-complete.target.
Sep 13 00:49:05.990406 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 00:49:06.012858 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:49:06.058046 kernel: audit: type=1130 audit(1757724546.047:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.013004 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 00:49:06.014672 systemd[1]: Reached target initrd-fs.target.
Sep 13 00:49:06.027235 systemd[1]: Reached target initrd.target.
Sep 13 00:49:06.029466 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 00:49:06.030814 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 00:49:06.047954 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 00:49:06.050491 systemd[1]: Starting initrd-cleanup.service...
Sep 13 00:49:06.068403 systemd[1]: Stopped target nss-lookup.target.
Sep 13 00:49:06.069240 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 00:49:06.070558 systemd[1]: Stopped target timers.target.
Sep 13 00:49:06.071710 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:49:06.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.071950 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 00:49:06.073154 systemd[1]: Stopped target initrd.target.
Sep 13 00:49:06.074357 systemd[1]: Stopped target basic.target.
Sep 13 00:49:06.075520 systemd[1]: Stopped target ignition-complete.target.
Sep 13 00:49:06.076682 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 00:49:06.077923 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 00:49:06.079033 systemd[1]: Stopped target remote-fs.target.
Sep 13 00:49:06.080140 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 00:49:06.081405 systemd[1]: Stopped target sysinit.target.
Sep 13 00:49:06.082504 systemd[1]: Stopped target local-fs.target.
Sep 13 00:49:06.083613 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 00:49:06.084702 systemd[1]: Stopped target swap.target.
Sep 13 00:49:06.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.085845 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:49:06.086074 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 00:49:06.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.087202 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:49:06.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.088204 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:49:06.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.088399 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:49:06.089630 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:49:06.100062 iscsid[1029]: iscsid shutting down.
Sep 13 00:49:06.089836 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:49:06.090897 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:49:06.091085 systemd[1]: Stopped ignition-files.service.
Sep 13 00:49:06.093488 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:49:06.095022 systemd[1]: Stopping iscsid.service...
Sep 13 00:49:06.106352 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:49:06.108251 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:49:06.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.109628 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 00:49:06.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.111857 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:49:06.112115 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:49:06.120069 ignition[1217]: INFO : Ignition 2.14.0
Sep 13 00:49:06.120069 ignition[1217]: INFO : Stage: umount
Sep 13 00:49:06.120069 ignition[1217]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:49:06.120069 ignition[1217]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:49:06.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.119816 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 00:49:06.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.121233 systemd[1]: Stopped iscsid.service.
Sep 13 00:49:06.125016 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:49:06.125965 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:49:06.144692 ignition[1217]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:49:06.144692 ignition[1217]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:49:06.144692 ignition[1217]: INFO : PUT result: OK
Sep 13 00:49:06.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.131215 systemd[1]: Stopping iscsiuio.service...
Sep 13 00:49:06.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.153397 ignition[1217]: INFO : umount: umount passed
Sep 13 00:49:06.153397 ignition[1217]: INFO : Ignition finished successfully
Sep 13 00:49:06.136152 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 00:49:06.136293 systemd[1]: Stopped iscsiuio.service.
Sep 13 00:49:06.143334 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:49:06.148495 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:49:06.148590 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:49:06.149726 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:49:06.149773 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:49:06.150372 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:49:06.150411 systemd[1]: Stopped ignition-kargs.service.
Sep 13 00:49:06.150999 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:49:06.151033 systemd[1]: Stopped ignition-fetch.service.
Sep 13 00:49:06.151542 systemd[1]: Stopped target network.target.
Sep 13 00:49:06.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.152091 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:49:06.152180 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:49:06.152764 systemd[1]: Stopped target paths.target.
Sep 13 00:49:06.153850 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:49:06.157935 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:49:06.158571 systemd[1]: Stopped target slices.target.
Sep 13 00:49:06.159523 systemd[1]: Stopped target sockets.target.
Sep 13 00:49:06.160560 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:49:06.160620 systemd[1]: Closed iscsid.socket.
Sep 13 00:49:06.162331 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:49:06.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.162384 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:49:06.163382 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:49:06.163449 systemd[1]: Stopped ignition-setup.service.
Sep 13 00:49:06.164519 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:49:06.165922 systemd[1]: Stopping systemd-resolved.service...
Sep 13 00:49:06.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.169949 systemd-networkd[1022]: eth0: DHCPv6 lease lost
Sep 13 00:49:06.178000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 00:49:06.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.171230 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:49:06.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.171373 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:49:06.172606 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:49:06.172653 systemd[1]: Closed systemd-networkd.socket.
Sep 13 00:49:06.174766 systemd[1]: Stopping network-cleanup.service...
Sep 13 00:49:06.178118 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:49:06.178188 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 00:49:06.179301 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:49:06.179388 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:49:06.180481 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:49:06.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.180538 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 00:49:06.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.187529 systemd[1]: Stopping systemd-udevd.service...
Sep 13 00:49:06.196000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 00:49:06.190366 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:49:06.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.191116 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:49:06.191248 systemd[1]: Stopped systemd-resolved.service.
Sep 13 00:49:06.195245 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:49:06.195420 systemd[1]: Stopped systemd-udevd.service.
Sep 13 00:49:06.198004 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:49:06.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.198058 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 00:49:06.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.201561 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:49:06.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.201599 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 00:49:06.202125 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:49:06.202171 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 00:49:06.202714 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:49:06.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.202758 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 00:49:06.203286 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:49:06.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.203332 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 00:49:06.204842 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 00:49:06.209412 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:49:06.209505 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Sep 13 00:49:06.213533 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:49:06.213597 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:49:06.214917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:49:06.214967 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 00:49:06.217513 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 13 00:49:06.219780 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:49:06.219897 systemd[1]: Stopped network-cleanup.service.
Sep 13 00:49:06.221221 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:49:06.221360 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 00:49:06.285437 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:49:06.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.285560 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:49:06.286318 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 00:49:06.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:06.287271 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:49:06.287329 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 00:49:06.289196 systemd[1]: Starting initrd-switch-root.service...
Sep 13 00:49:06.307265 systemd[1]: Switching root.
Sep 13 00:49:06.309000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:49:06.309000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:49:06.309000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:49:06.309000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 00:49:06.309000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 00:49:06.329269 systemd-journald[185]: Journal stopped
Sep 13 00:49:11.358254 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:49:11.358354 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:49:11.358386 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 00:49:11.358407 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:49:11.358427 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:49:11.358450 kernel: SELinux: policy capability open_perms=1 Sep 13 00:49:11.358469 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:49:11.358488 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:49:11.358513 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:49:11.358532 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:49:11.358556 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:49:11.358575 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:49:11.358602 systemd[1]: Successfully loaded SELinux policy in 70.608ms. Sep 13 00:49:11.358630 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.538ms. Sep 13 00:49:11.358656 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:49:11.358682 systemd[1]: Detected virtualization amazon. Sep 13 00:49:11.358702 systemd[1]: Detected architecture x86-64. Sep 13 00:49:11.358722 systemd[1]: Detected first boot. Sep 13 00:49:11.358743 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:49:11.358764 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:49:11.358783 systemd[1]: Populated /etc with preset unit settings. 
Sep 13 00:49:11.358808 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:49:11.358830 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:49:11.358853 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:49:11.358892 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:49:11.358915 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Sep 13 00:49:11.358936 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:49:11.358957 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:49:11.358977 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 13 00:49:11.358998 systemd[1]: Created slice system-getty.slice. Sep 13 00:49:11.359018 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:49:11.359036 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:49:11.359056 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:49:11.359074 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:49:11.359092 systemd[1]: Created slice user.slice. Sep 13 00:49:11.359110 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:49:11.359128 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:49:11.359146 systemd[1]: Set up automount boot.automount. Sep 13 00:49:11.359170 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:49:11.359198 systemd[1]: Reached target integritysetup.target. Sep 13 00:49:11.359220 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:49:11.359241 systemd[1]: Reached target remote-fs.target. 
Sep 13 00:49:11.359260 systemd[1]: Reached target slices.target. Sep 13 00:49:11.359279 systemd[1]: Reached target swap.target. Sep 13 00:49:11.359298 systemd[1]: Reached target torcx.target. Sep 13 00:49:11.359318 systemd[1]: Reached target veritysetup.target. Sep 13 00:49:11.359336 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:49:11.359354 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:49:11.359372 kernel: kauditd_printk_skb: 52 callbacks suppressed Sep 13 00:49:11.359392 kernel: audit: type=1400 audit(1757724551.132:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:49:11.359414 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:49:11.359433 kernel: audit: type=1335 audit(1757724551.132:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 00:49:11.359451 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:49:11.359470 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:49:11.359489 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:49:11.359508 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:49:11.359526 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:49:11.359545 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:49:11.359567 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:49:11.359587 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:49:11.359606 systemd[1]: Mounting media.mount... Sep 13 00:49:11.359625 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:49:11.359644 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:49:11.359663 systemd[1]: Mounting sys-kernel-tracing.mount... 
Sep 13 00:49:11.359681 systemd[1]: Mounting tmp.mount... Sep 13 00:49:11.359701 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:49:11.359721 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:49:11.359743 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:49:11.359761 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:49:11.359779 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:49:11.359798 systemd[1]: Starting modprobe@drm.service... Sep 13 00:49:11.359816 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:49:11.359834 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:49:11.359853 systemd[1]: Starting modprobe@loop.service... Sep 13 00:49:11.364911 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:49:11.364958 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 00:49:11.364985 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 13 00:49:11.365006 systemd[1]: Starting systemd-journald.service... Sep 13 00:49:11.365025 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:49:11.365044 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:49:11.365064 kernel: loop: module loaded Sep 13 00:49:11.365083 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:49:11.365103 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:49:11.365121 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:49:11.365141 kernel: audit: type=1305 audit(1757724551.327:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:49:11.365162 systemd[1]: Mounted dev-hugepages.mount. 
Sep 13 00:49:11.365182 kernel: audit: type=1300 audit(1757724551.327:91): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe9ee4ed00 a2=4000 a3=7ffe9ee4ed9c items=0 ppid=1 pid=1373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:11.365201 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:49:11.365231 systemd[1]: Mounted media.mount. Sep 13 00:49:11.365250 kernel: audit: type=1327 audit(1757724551.327:91): proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:49:11.365267 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:49:11.365286 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:49:11.365312 systemd-journald[1373]: Journal started Sep 13 00:49:11.365395 systemd-journald[1373]: Runtime Journal (/run/log/journal/ec2c3d5b7eedec2455615fdc1ada5649) is 4.8M, max 38.3M, 33.5M free. Sep 13 00:49:11.132000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 00:49:11.327000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:49:11.327000 audit[1373]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe9ee4ed00 a2=4000 a3=7ffe9ee4ed9c items=0 ppid=1 pid=1373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:11.327000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:49:11.374481 systemd[1]: Started systemd-journald.service. 
Sep 13 00:49:11.384796 kernel: audit: type=1130 audit(1757724551.374:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.380456 systemd[1]: Mounted tmp.mount. Sep 13 00:49:11.386519 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:49:11.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.388095 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:49:11.388374 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:49:11.404945 kernel: audit: type=1130 audit(1757724551.385:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.405044 kernel: fuse: init (API version 7.34) Sep 13 00:49:11.405067 kernel: audit: type=1130 audit(1757724551.396:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.398793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 13 00:49:11.399055 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:49:11.406665 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:49:11.406937 systemd[1]: Finished modprobe@drm.service. Sep 13 00:49:11.408820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:49:11.409064 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:49:11.410552 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:49:11.410768 systemd[1]: Finished modprobe@loop.service. Sep 13 00:49:11.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:11.411904 kernel: audit: type=1131 audit(1757724551.396:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.411942 kernel: audit: type=1130 audit(1757724551.404:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.429552 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:49:11.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:11.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.430127 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:49:11.431805 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:49:11.433425 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:49:11.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.439354 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:49:11.441244 systemd[1]: Reached target network-pre.target. Sep 13 00:49:11.444251 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:49:11.449517 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:49:11.453580 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:49:11.466358 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:49:11.469237 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:49:11.470388 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:49:11.474126 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:49:11.475237 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:49:11.476959 systemd[1]: Starting systemd-sysctl.service... 
Sep 13 00:49:11.485774 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:49:11.486972 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:49:11.499057 systemd-journald[1373]: Time spent on flushing to /var/log/journal/ec2c3d5b7eedec2455615fdc1ada5649 is 61.812ms for 1166 entries. Sep 13 00:49:11.499057 systemd-journald[1373]: System Journal (/var/log/journal/ec2c3d5b7eedec2455615fdc1ada5649) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:49:11.590833 systemd-journald[1373]: Received client request to flush runtime journal. Sep 13 00:49:11.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.504751 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:49:11.505961 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:49:11.507547 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:49:11.591776 udevadm[1415]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Sep 13 00:49:11.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.510532 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:49:11.535936 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:49:11.538618 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:49:11.551374 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:49:11.593617 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:49:11.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.643560 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:49:11.646259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:49:11.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:11.713960 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:49:12.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.131619 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:49:12.133379 systemd[1]: Starting systemd-udevd.service... Sep 13 00:49:12.154450 systemd-udevd[1425]: Using default interface naming scheme 'v252'. 
Sep 13 00:49:12.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.201296 systemd[1]: Started systemd-udevd.service. Sep 13 00:49:12.203470 systemd[1]: Starting systemd-networkd.service... Sep 13 00:49:12.212575 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:49:12.247994 systemd[1]: Found device dev-ttyS0.device. Sep 13 00:49:12.256204 (udev-worker)[1427]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:49:12.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.264341 systemd[1]: Started systemd-userdbd.service. Sep 13 00:49:12.281885 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:49:12.290896 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:49:12.303949 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 13 00:49:12.305924 kernel: ACPI: button: Sleep Button [SLPF] Sep 13 00:49:12.317000 audit[1434]: AVC avc: denied { confidentiality } for pid=1434 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:49:12.317000 audit[1434]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=565530a547c0 a1=338ec a2=7fb6dfbdabc5 a3=5 items=110 ppid=1425 pid=1434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:12.317000 audit: CWD cwd="/" Sep 13 00:49:12.317000 audit: PATH item=0 name=(null) inode=29 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=1 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=2 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=3 name=(null) inode=13710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=4 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=5 name=(null) inode=13711 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=6 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=7 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=8 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=9 name=(null) inode=13713 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=10 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=11 name=(null) inode=13714 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=12 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=13 name=(null) inode=13715 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=14 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=15 name=(null) inode=13716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=16 name=(null) inode=13712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=17 name=(null) inode=13717 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=18 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=19 name=(null) inode=13718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=20 name=(null) inode=13718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=21 name=(null) inode=13719 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=22 name=(null) inode=13718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=23 name=(null) inode=13720 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=24 name=(null) inode=13718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=25 name=(null) inode=13721 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=26 name=(null) inode=13718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=27 name=(null) inode=13722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:49:12.317000 audit: PATH item=28 name=(null) inode=13718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=29 name=(null) inode=13723 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=30 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=31 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=32 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=33 name=(null) inode=13725 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=34 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=35 name=(null) inode=13726 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=36 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=37 
name=(null) inode=13727 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=38 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=39 name=(null) inode=13728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=40 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=41 name=(null) inode=13729 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=42 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=43 name=(null) inode=13730 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=44 name=(null) inode=13730 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=45 name=(null) inode=13731 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=46 name=(null) inode=13730 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=47 name=(null) inode=13732 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=48 name=(null) inode=13730 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=49 name=(null) inode=13733 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=50 name=(null) inode=13730 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=51 name=(null) inode=13734 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=52 name=(null) inode=13730 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=53 name=(null) inode=13735 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=54 name=(null) inode=29 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=55 name=(null) inode=13736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=56 name=(null) inode=13736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=57 name=(null) inode=13737 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=58 name=(null) inode=13736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=59 name=(null) inode=13738 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=60 name=(null) inode=13736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=61 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=62 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=63 name=(null) inode=13740 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=64 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=65 name=(null) inode=13741 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=66 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=67 name=(null) inode=13742 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=68 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.361890 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 13 00:49:12.317000 audit: PATH item=69 name=(null) inode=13743 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=70 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=71 name=(null) inode=13744 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=72 name=(null) inode=13736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=73 name=(null) 
inode=13745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=74 name=(null) inode=13745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=75 name=(null) inode=13746 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=76 name=(null) inode=13745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=77 name=(null) inode=13747 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=78 name=(null) inode=13745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=79 name=(null) inode=13748 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=80 name=(null) inode=13745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=81 name=(null) inode=13749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=82 name=(null) inode=13745 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=83 name=(null) inode=13750 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=84 name=(null) inode=13736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=85 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=86 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=87 name=(null) inode=13752 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=88 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=89 name=(null) inode=13753 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=90 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=91 name=(null) inode=13754 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=92 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=93 name=(null) inode=13755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=94 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=95 name=(null) inode=13756 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=96 name=(null) inode=13736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=97 name=(null) inode=13757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=98 name=(null) inode=13757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=99 name=(null) inode=13758 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=100 name=(null) inode=13757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=101 name=(null) inode=13759 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=102 name=(null) inode=13757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=103 name=(null) inode=13760 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=104 name=(null) inode=13757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=105 name=(null) inode=13761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=106 name=(null) inode=13757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=107 name=(null) inode=13762 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PATH item=109 name=(null) inode=13794 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:12.317000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:49:12.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.367062 systemd-networkd[1433]: lo: Link UP Sep 13 00:49:12.367071 systemd-networkd[1433]: lo: Gained carrier Sep 13 00:49:12.367495 systemd-networkd[1433]: Enumeration completed Sep 13 00:49:12.367620 systemd[1]: Started systemd-networkd.service. Sep 13 00:49:12.369433 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:49:12.372008 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:49:12.380486 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:49:12.380163 systemd-networkd[1433]: eth0: Link UP Sep 13 00:49:12.380295 systemd-networkd[1433]: eth0: Gained carrier Sep 13 00:49:12.391198 systemd-networkd[1433]: eth0: DHCPv4 address 172.31.30.243/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:49:12.397912 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:49:12.406054 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:49:12.528852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:49:12.530629 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:49:12.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.538033 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:49:12.580014 lvm[1540]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Sep 13 00:49:12.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.608168 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:49:12.608837 systemd[1]: Reached target cryptsetup.target. Sep 13 00:49:12.610635 systemd[1]: Starting lvm2-activation.service... Sep 13 00:49:12.616293 lvm[1542]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:49:12.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.648313 systemd[1]: Finished lvm2-activation.service. Sep 13 00:49:12.648952 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:49:12.649565 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:49:12.649586 systemd[1]: Reached target local-fs.target. Sep 13 00:49:12.650040 systemd[1]: Reached target machines.target. Sep 13 00:49:12.651733 systemd[1]: Starting ldconfig.service... Sep 13 00:49:12.653719 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:49:12.653808 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:12.655150 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:49:12.658104 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:49:12.660634 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:49:12.663412 systemd[1]: Starting systemd-sysext.service... 
Sep 13 00:49:12.680442 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1545 (bootctl) Sep 13 00:49:12.682364 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:49:12.690340 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:49:12.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.695043 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:49:12.704585 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:49:12.704964 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:49:12.722905 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:49:12.844916 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:49:12.861968 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:49:12.863694 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:49:12.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.864981 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:49:12.867322 systemd-fsck[1557]: fsck.fat 4.2 (2021-01-31) Sep 13 00:49:12.867322 systemd-fsck[1557]: /dev/nvme0n1p1: 790 files, 120761/258078 clusters Sep 13 00:49:12.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.872082 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Sep 13 00:49:12.874699 systemd[1]: Mounting boot.mount... Sep 13 00:49:12.894911 systemd[1]: Mounted boot.mount. Sep 13 00:49:12.896124 (sd-sysext)[1560]: Using extensions 'kubernetes'. Sep 13 00:49:12.896637 (sd-sysext)[1560]: Merged extensions into '/usr'. Sep 13 00:49:12.935557 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:49:12.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.945043 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:49:12.946934 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:49:12.948268 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:49:12.950534 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:49:12.953295 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:49:12.956185 systemd[1]: Starting modprobe@loop.service... Sep 13 00:49:12.957372 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:49:12.958461 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:12.958851 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:49:12.972646 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:49:12.974151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:49:12.974565 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 00:49:12.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.977036 systemd[1]: Finished systemd-sysext.service. Sep 13 00:49:12.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.978424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:49:12.978750 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:49:12.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.980239 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:49:12.980592 systemd[1]: Finished modprobe@loop.service. Sep 13 00:49:12.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:12.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:12.987217 systemd[1]: Starting ensure-sysext.service... Sep 13 00:49:12.988315 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:49:12.989099 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:49:12.993112 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:49:13.001355 systemd[1]: Reloading. Sep 13 00:49:13.017997 systemd-tmpfiles[1591]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:49:13.020743 systemd-tmpfiles[1591]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:49:13.026829 systemd-tmpfiles[1591]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:49:13.090443 /usr/lib/systemd/system-generators/torcx-generator[1613]: time="2025-09-13T00:49:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:49:13.090484 /usr/lib/systemd/system-generators/torcx-generator[1613]: time="2025-09-13T00:49:13Z" level=info msg="torcx already run" Sep 13 00:49:13.291607 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:49:13.291826 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Sep 13 00:49:13.315481 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:49:13.388781 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:49:13.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.395290 systemd[1]: Starting audit-rules.service... Sep 13 00:49:13.398173 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:49:13.401187 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:49:13.409322 systemd[1]: Starting systemd-resolved.service... Sep 13 00:49:13.416565 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:49:13.419910 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:49:13.432434 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:49:13.436439 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:49:13.441406 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:49:13.444123 systemd[1]: Starting modprobe@loop.service... Sep 13 00:49:13.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.449052 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:49:13.449295 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:13.450880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:49:13.451117 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:49:13.457817 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:49:13.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.459774 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:49:13.460585 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:49:13.460805 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:13.461778 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:49:13.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:13.462081 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:49:13.464226 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:49:13.464446 systemd[1]: Finished modprobe@loop.service. Sep 13 00:49:13.466599 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:49:13.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.475817 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:49:13.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.481335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:49:13.482172 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:49:13.487079 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:49:13.492000 audit[1682]: SYSTEM_BOOT pid=1682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.495259 systemd[1]: Starting modprobe@drm.service... Sep 13 00:49:13.499504 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:49:13.504648 systemd[1]: Starting modprobe@loop.service... 
Sep 13 00:49:13.506697 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:49:13.506964 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:13.507180 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:49:13.508617 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:49:13.508856 systemd[1]: Finished modprobe@drm.service. Sep 13 00:49:13.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.520463 systemd[1]: Finished ensure-sysext.service. Sep 13 00:49:13.525382 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:49:13.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:13.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:13.528237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:49:13.528478 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:49:13.529433 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:49:13.529662 systemd[1]: Finished modprobe@loop.service. Sep 13 00:49:13.531375 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:49:13.531425 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:49:13.580185 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:49:13.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:13.606000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:49:13.606000 audit[1714]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffac76a560 a2=420 a3=0 items=0 ppid=1674 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:13.606000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:49:13.608279 augenrules[1714]: No rules Sep 13 00:49:13.609443 systemd[1]: Finished audit-rules.service. Sep 13 00:49:13.656858 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:49:13.657561 systemd[1]: Reached target time-set.target. Sep 13 00:49:13.670782 systemd-resolved[1677]: Positive Trust Anchors: Sep 13 00:49:13.670797 systemd-resolved[1677]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:49:13.670831 systemd-resolved[1677]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:49:13.674258 systemd-timesyncd[1679]: Contacted time server 23.186.168.131:123 (0.flatcar.pool.ntp.org). Sep 13 00:49:13.674705 systemd-timesyncd[1679]: Initial clock synchronization to Sat 2025-09-13 00:49:13.634986 UTC. Sep 13 00:49:13.704582 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 13 00:49:13.704602 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:49:13.716144 systemd-resolved[1677]: Defaulting to hostname 'linux'. Sep 13 00:49:13.718426 systemd[1]: Started systemd-resolved.service. Sep 13 00:49:13.719105 systemd[1]: Reached target network.target. Sep 13 00:49:13.719648 systemd[1]: Reached target nss-lookup.target. Sep 13 00:49:13.722389 ldconfig[1544]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:49:13.732131 systemd[1]: Finished ldconfig.service. Sep 13 00:49:13.734042 systemd[1]: Starting systemd-update-done.service... Sep 13 00:49:13.743335 systemd[1]: Finished systemd-update-done.service. Sep 13 00:49:13.744145 systemd[1]: Reached target sysinit.target. Sep 13 00:49:13.744853 systemd[1]: Started motdgen.path. Sep 13 00:49:13.745559 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:49:13.746356 systemd[1]: Started logrotate.timer. Sep 13 00:49:13.747054 systemd[1]: Started mdadm.timer. Sep 13 00:49:13.747606 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:49:13.748395 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:49:13.748436 systemd[1]: Reached target paths.target. Sep 13 00:49:13.748950 systemd[1]: Reached target timers.target. Sep 13 00:49:13.749843 systemd[1]: Listening on dbus.socket. Sep 13 00:49:13.751590 systemd[1]: Starting docker.socket... Sep 13 00:49:13.754745 systemd[1]: Listening on sshd.socket. Sep 13 00:49:13.755496 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:13.756215 systemd[1]: Listening on docker.socket. Sep 13 00:49:13.756799 systemd[1]: Reached target sockets.target. 
Sep 13 00:49:13.757446 systemd[1]: Reached target basic.target. Sep 13 00:49:13.758202 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:49:13.758271 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:49:13.758307 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:49:13.759923 systemd[1]: Starting containerd.service... Sep 13 00:49:13.761979 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 13 00:49:13.764494 systemd[1]: Starting dbus.service... Sep 13 00:49:13.769044 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:49:13.774281 systemd[1]: Starting extend-filesystems.service... Sep 13 00:49:13.774793 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:49:13.776499 systemd[1]: Starting motdgen.service... Sep 13 00:49:13.784652 systemd[1]: Starting prepare-helm.service... Sep 13 00:49:13.788274 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:49:13.790612 jq[1731]: false Sep 13 00:49:13.791170 systemd[1]: Starting sshd-keygen.service... Sep 13 00:49:13.801097 systemd[1]: Starting systemd-logind.service... Sep 13 00:49:13.801963 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:13.802072 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:49:13.805139 systemd[1]: Starting update-engine.service... Sep 13 00:49:13.807753 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:49:13.815618 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 13 00:49:13.816014 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:49:13.825566 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:49:13.825926 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:49:13.853036 jq[1744]: true Sep 13 00:49:13.880180 extend-filesystems[1732]: Found loop1 Sep 13 00:49:13.881249 extend-filesystems[1732]: Found nvme0n1 Sep 13 00:49:13.881249 extend-filesystems[1732]: Found nvme0n1p1 Sep 13 00:49:13.881249 extend-filesystems[1732]: Found nvme0n1p2 Sep 13 00:49:13.881249 extend-filesystems[1732]: Found nvme0n1p3 Sep 13 00:49:13.881249 extend-filesystems[1732]: Found usr Sep 13 00:49:13.881249 extend-filesystems[1732]: Found nvme0n1p4 Sep 13 00:49:13.881249 extend-filesystems[1732]: Found nvme0n1p6 Sep 13 00:49:13.881249 extend-filesystems[1732]: Found nvme0n1p7 Sep 13 00:49:13.881249 extend-filesystems[1732]: Found nvme0n1p9 Sep 13 00:49:13.881249 extend-filesystems[1732]: Checking size of /dev/nvme0n1p9 Sep 13 00:49:13.902764 tar[1747]: linux-amd64/helm Sep 13 00:49:13.912245 jq[1759]: true Sep 13 00:49:13.919401 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:49:13.919833 systemd[1]: Finished motdgen.service. Sep 13 00:49:13.944765 dbus-daemon[1730]: [system] SELinux support is enabled Sep 13 00:49:13.945047 systemd[1]: Started dbus.service. Sep 13 00:49:13.948834 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:49:13.948901 systemd[1]: Reached target system-config.target. Sep 13 00:49:13.949517 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:49:13.949549 systemd[1]: Reached target user-config.target. 
Sep 13 00:49:13.953474 dbus-daemon[1730]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1433 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 00:49:13.965960 extend-filesystems[1732]: Resized partition /dev/nvme0n1p9 Sep 13 00:49:13.970283 extend-filesystems[1787]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:49:13.971101 dbus-daemon[1730]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:49:13.976480 systemd[1]: Starting systemd-hostnamed.service... Sep 13 00:49:13.979228 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 13 00:49:14.026109 update_engine[1742]: I0913 00:49:14.025115 1742 main.cc:92] Flatcar Update Engine starting Sep 13 00:49:14.040210 systemd[1]: Started update-engine.service. Sep 13 00:49:14.043378 systemd[1]: Started locksmithd.service. Sep 13 00:49:14.044792 update_engine[1742]: I0913 00:49:14.044757 1742 update_check_scheduler.cc:74] Next update check in 11m19s Sep 13 00:49:14.065895 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 13 00:49:14.080588 extend-filesystems[1787]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 13 00:49:14.080588 extend-filesystems[1787]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:49:14.080588 extend-filesystems[1787]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 13 00:49:14.087990 extend-filesystems[1732]: Resized filesystem in /dev/nvme0n1p9 Sep 13 00:49:14.093616 bash[1797]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:49:14.081714 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:49:14.082073 systemd[1]: Finished extend-filesystems.service. Sep 13 00:49:14.085307 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Sep 13 00:49:14.132990 systemd-networkd[1433]: eth0: Gained IPv6LL Sep 13 00:49:14.135864 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:49:14.136769 systemd[1]: Reached target network-online.target. Sep 13 00:49:14.139355 systemd[1]: Started amazon-ssm-agent.service. Sep 13 00:49:14.144718 env[1756]: time="2025-09-13T00:49:14.144663593Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:49:14.149544 systemd[1]: Starting kubelet.service... Sep 13 00:49:14.158769 systemd[1]: Started nvidia.service. Sep 13 00:49:14.281240 systemd-logind[1741]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:49:14.281274 systemd-logind[1741]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 13 00:49:14.281298 systemd-logind[1741]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:49:14.290638 systemd-logind[1741]: New seat seat0. Sep 13 00:49:14.316741 systemd[1]: Started systemd-logind.service. Sep 13 00:49:14.377246 amazon-ssm-agent[1807]: 2025/09/13 00:49:14 Failed to load instance info from vault. RegistrationKey does not exist. Sep 13 00:49:14.379793 amazon-ssm-agent[1807]: Initializing new seelog logger Sep 13 00:49:14.380008 amazon-ssm-agent[1807]: New Seelog Logger Creation Complete Sep 13 00:49:14.380102 amazon-ssm-agent[1807]: 2025/09/13 00:49:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:49:14.380102 amazon-ssm-agent[1807]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:49:14.380394 amazon-ssm-agent[1807]: 2025/09/13 00:49:14 processing appconfig overrides Sep 13 00:49:14.427393 env[1756]: time="2025-09-13T00:49:14.427307848Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Sep 13 00:49:14.442388 env[1756]: time="2025-09-13T00:49:14.442345037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:49:14.450409 env[1756]: time="2025-09-13T00:49:14.450352556Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:49:14.450574 env[1756]: time="2025-09-13T00:49:14.450553545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:49:14.451072 env[1756]: time="2025-09-13T00:49:14.451039045Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:49:14.454675 env[1756]: time="2025-09-13T00:49:14.454634462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:49:14.458226 env[1756]: time="2025-09-13T00:49:14.458182914Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:49:14.458393 env[1756]: time="2025-09-13T00:49:14.458372609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:49:14.458614 env[1756]: time="2025-09-13T00:49:14.458592826Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:49:14.460500 env[1756]: time="2025-09-13T00:49:14.460464323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:49:14.468106 env[1756]: time="2025-09-13T00:49:14.468028851Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:49:14.468301 env[1756]: time="2025-09-13T00:49:14.468278576Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:49:14.468529 env[1756]: time="2025-09-13T00:49:14.468497620Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:49:14.468642 env[1756]: time="2025-09-13T00:49:14.468623768Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:49:14.482576 env[1756]: time="2025-09-13T00:49:14.482376852Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:49:14.482576 env[1756]: time="2025-09-13T00:49:14.482437572Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:49:14.482576 env[1756]: time="2025-09-13T00:49:14.482459836Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:49:14.482576 env[1756]: time="2025-09-13T00:49:14.482526549Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:49:14.483440 env[1756]: time="2025-09-13T00:49:14.482552898Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:49:14.483440 env[1756]: time="2025-09-13T00:49:14.482960122Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 13 00:49:14.483440 env[1756]: time="2025-09-13T00:49:14.482986944Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:49:14.483440 env[1756]: time="2025-09-13T00:49:14.483009883Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:49:14.483440 env[1756]: time="2025-09-13T00:49:14.483033126Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:49:14.483440 env[1756]: time="2025-09-13T00:49:14.483053641Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:49:14.483440 env[1756]: time="2025-09-13T00:49:14.483073527Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:49:14.483440 env[1756]: time="2025-09-13T00:49:14.483093521Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:49:14.483440 env[1756]: time="2025-09-13T00:49:14.483246979Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:49:14.483440 env[1756]: time="2025-09-13T00:49:14.483346109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:49:14.484412 env[1756]: time="2025-09-13T00:49:14.484382874Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:49:14.484702 env[1756]: time="2025-09-13T00:49:14.484680114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:49:14.484834 env[1756]: time="2025-09-13T00:49:14.484815478Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Sep 13 00:49:14.485019 env[1756]: time="2025-09-13T00:49:14.485001494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:49:14.485242 env[1756]: time="2025-09-13T00:49:14.485222060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:49:14.485357 env[1756]: time="2025-09-13T00:49:14.485338995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:49:14.485455 env[1756]: time="2025-09-13T00:49:14.485439761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:49:14.485563 env[1756]: time="2025-09-13T00:49:14.485545464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:49:14.485665 env[1756]: time="2025-09-13T00:49:14.485649657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:49:14.485782 env[1756]: time="2025-09-13T00:49:14.485763243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:49:14.485912 env[1756]: time="2025-09-13T00:49:14.485895528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:49:14.486033 env[1756]: time="2025-09-13T00:49:14.486017723Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:49:14.486340 env[1756]: time="2025-09-13T00:49:14.486306877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:49:14.486450 env[1756]: time="2025-09-13T00:49:14.486431762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 13 00:49:14.486555 env[1756]: time="2025-09-13T00:49:14.486538967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:49:14.486651 env[1756]: time="2025-09-13T00:49:14.486635651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:49:14.486756 env[1756]: time="2025-09-13T00:49:14.486736617Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:49:14.486843 env[1756]: time="2025-09-13T00:49:14.486828088Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:49:14.486967 env[1756]: time="2025-09-13T00:49:14.486950708Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:49:14.487100 env[1756]: time="2025-09-13T00:49:14.487084442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:49:14.487653 env[1756]: time="2025-09-13T00:49:14.487561806Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:49:14.491351 env[1756]: time="2025-09-13T00:49:14.487817881Z" level=info msg="Connect containerd service" Sep 13 00:49:14.491351 env[1756]: time="2025-09-13T00:49:14.487898590Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:49:14.491351 env[1756]: time="2025-09-13T00:49:14.488973653Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:49:14.491351 env[1756]: time="2025-09-13T00:49:14.489108387Z" level=info msg="Start subscribing containerd event" Sep 13 00:49:14.491351 env[1756]: time="2025-09-13T00:49:14.489176263Z" level=info msg="Start recovering state" Sep 13 00:49:14.491351 env[1756]: time="2025-09-13T00:49:14.489264758Z" level=info msg="Start event monitor" Sep 13 00:49:14.493992 env[1756]: time="2025-09-13T00:49:14.493941975Z" level=info msg="Start snapshots syncer" Sep 13 00:49:14.495063 env[1756]: time="2025-09-13T00:49:14.495038526Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:49:14.504006 env[1756]: time="2025-09-13T00:49:14.503936199Z" level=info msg="Start streaming server" Sep 13 00:49:14.504354 env[1756]: time="2025-09-13T00:49:14.495000650Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:49:14.504567 env[1756]: time="2025-09-13T00:49:14.504548914Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:49:14.504860 systemd[1]: Started containerd.service. 
Sep 13 00:49:14.506153 env[1756]: time="2025-09-13T00:49:14.506121223Z" level=info msg="containerd successfully booted in 0.454255s" Sep 13 00:49:14.519508 dbus-daemon[1730]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 00:49:14.519704 systemd[1]: Started systemd-hostnamed.service. Sep 13 00:49:14.522096 dbus-daemon[1730]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1788 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 00:49:14.526606 systemd[1]: Starting polkit.service... Sep 13 00:49:14.557418 polkitd[1868]: Started polkitd version 121 Sep 13 00:49:14.584351 polkitd[1868]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 00:49:14.585551 polkitd[1868]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 00:49:14.597758 polkitd[1868]: Finished loading, compiling and executing 2 rules Sep 13 00:49:14.598973 dbus-daemon[1730]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 00:49:14.599172 systemd[1]: Started polkit.service. Sep 13 00:49:14.601645 polkitd[1868]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 00:49:14.625211 systemd[1]: nvidia.service: Deactivated successfully. Sep 13 00:49:14.639605 systemd-hostnamed[1788]: Hostname set to (transient) Sep 13 00:49:14.639748 systemd-resolved[1677]: System hostname changed to 'ip-172-31-30-243'. 
Sep 13 00:49:14.744221 coreos-metadata[1728]: Sep 13 00:49:14.744 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 13 00:49:14.752804 coreos-metadata[1728]: Sep 13 00:49:14.752 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Sep 13 00:49:14.753864 coreos-metadata[1728]: Sep 13 00:49:14.753 INFO Fetch successful
Sep 13 00:49:14.754019 coreos-metadata[1728]: Sep 13 00:49:14.753 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 13 00:49:14.755215 coreos-metadata[1728]: Sep 13 00:49:14.755 INFO Fetch successful
Sep 13 00:49:14.758992 unknown[1728]: wrote ssh authorized keys file for user: core
Sep 13 00:49:14.784896 update-ssh-keys[1914]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:49:14.785717 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Sep 13 00:49:15.027911 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO Create new startup processor
Sep 13 00:49:15.028091 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [LongRunningPluginsManager] registered plugins: {}
Sep 13 00:49:15.028150 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO Initializing bookkeeping folders
Sep 13 00:49:15.028150 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO removing the completed state files
Sep 13 00:49:15.028150 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO Initializing bookkeeping folders for long running plugins
Sep 13 00:49:15.028267 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Sep 13 00:49:15.028267 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO Initializing healthcheck folders for long running plugins
Sep 13 00:49:15.028267 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO Initializing locations for inventory plugin
Sep 13 00:49:15.028267 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO Initializing default location for custom inventory
Sep 13 00:49:15.028267 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO Initializing default location for file inventory
Sep 13 00:49:15.028267 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO Initializing default location for role inventory
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO Init the cloudwatchlogs publisher
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [instanceID=i-05f1760e9e466e774] Successfully loaded platform independent plugin aws:runPowerShellScript
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [instanceID=i-05f1760e9e466e774] Successfully loaded platform independent plugin aws:updateSsmAgent
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [instanceID=i-05f1760e9e466e774] Successfully loaded platform independent plugin aws:runDocument
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [instanceID=i-05f1760e9e466e774] Successfully loaded platform independent plugin aws:refreshAssociation
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [instanceID=i-05f1760e9e466e774] Successfully loaded platform independent plugin aws:configurePackage
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [instanceID=i-05f1760e9e466e774] Successfully loaded platform independent plugin aws:downloadContent
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [instanceID=i-05f1760e9e466e774] Successfully loaded platform independent plugin aws:softwareInventory
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [instanceID=i-05f1760e9e466e774] Successfully loaded platform independent plugin aws:configureDocker
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [instanceID=i-05f1760e9e466e774] Successfully loaded platform independent plugin aws:runDockerAction
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [instanceID=i-05f1760e9e466e774] Successfully loaded platform dependent plugin aws:runShellScript
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Sep 13 00:49:15.028497 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO OS: linux, Arch: amd64
Sep 13 00:49:15.036097 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessageGatewayService] Starting session document processing engine...
Sep 13 00:49:15.036097 amazon-ssm-agent[1807]: datastore file /var/lib/amazon/ssm/i-05f1760e9e466e774/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Sep 13 00:49:15.131789 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessageGatewayService] [EngineProcessor] Starting
Sep 13 00:49:15.227313 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Sep 13 00:49:15.321383 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessageGatewayService] listening reply.
Sep 13 00:49:15.416007 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessagingDeliveryService] Starting document processing engine...
Sep 13 00:49:15.442467 tar[1747]: linux-amd64/LICENSE
Sep 13 00:49:15.443056 tar[1747]: linux-amd64/README.md
Sep 13 00:49:15.454645 systemd[1]: Finished prepare-helm.service.
Sep 13 00:49:15.512354 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Sep 13 00:49:15.608420 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Sep 13 00:49:15.654356 locksmithd[1799]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:49:15.703664 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessagingDeliveryService] Starting message polling
Sep 13 00:49:15.799155 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessagingDeliveryService] Starting send replies to MDS
Sep 13 00:49:15.895211 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [instanceID=i-05f1760e9e466e774] Starting association polling
Sep 13 00:49:15.991039 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Sep 13 00:49:16.088158 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessagingDeliveryService] [Association] Launching response handler
Sep 13 00:49:16.188461 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Sep 13 00:49:16.190617 systemd[1]: Started kubelet.service.
Sep 13 00:49:16.284953 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [LongRunningPluginsManager] starting long running plugin manager
Sep 13 00:49:16.382353 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Sep 13 00:49:16.407742 sshd_keygen[1763]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:49:16.438459 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:49:16.441683 systemd[1]: Starting issuegen.service...
Sep 13 00:49:16.453720 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:49:16.454056 systemd[1]: Finished issuegen.service.
Sep 13 00:49:16.457340 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:49:16.467585 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:49:16.470858 systemd[1]: Started getty@tty1.service.
Sep 13 00:49:16.475754 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 00:49:16.477450 systemd[1]: Reached target getty.target.
Sep 13 00:49:16.479103 systemd[1]: Reached target multi-user.target.
Sep 13 00:49:16.479567 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [OfflineService] Starting document processing engine...
Sep 13 00:49:16.484048 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:49:16.493250 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:49:16.493585 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:49:16.498079 systemd[1]: Startup finished in 7.776s (kernel) + 9.569s (userspace) = 17.346s.
Sep 13 00:49:16.577031 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [OfflineService] [EngineProcessor] Starting
Sep 13 00:49:16.674404 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [OfflineService] [EngineProcessor] Initial processing
Sep 13 00:49:16.771884 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [OfflineService] Starting message polling
Sep 13 00:49:16.869458 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [OfflineService] Starting send replies to MDS
Sep 13 00:49:16.967321 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-05f1760e9e466e774, requestId: df7b4416-ed99-45e3-945b-84520f63f4bc
Sep 13 00:49:17.065406 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [HealthCheck] HealthCheck reporting agent health.
Sep 13 00:49:17.070609 kubelet[1946]: E0913 00:49:17.070554 1946 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:49:17.072628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:49:17.072842 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:49:17.163649 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Sep 13 00:49:17.262019 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Sep 13 00:49:17.360736 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Sep 13 00:49:17.459519 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [StartupProcessor] Executing startup processor tasks
Sep 13 00:49:17.558541 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Sep 13 00:49:17.657837 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Sep 13 00:49:17.757213 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8
Sep 13 00:49:17.856863 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-05f1760e9e466e774?role=subscribe&stream=input
Sep 13 00:49:17.956654 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-05f1760e9e466e774?role=subscribe&stream=input
Sep 13 00:49:18.056744 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessageGatewayService] Starting receiving message from control channel
Sep 13 00:49:18.157003 amazon-ssm-agent[1807]: 2025-09-13 00:49:15 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Sep 13 00:49:23.694357 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:49:23.695957 systemd[1]: Started sshd@0-172.31.30.243:22-147.75.109.163:34802.service.
Sep 13 00:49:23.896366 sshd[1972]: Accepted publickey for core from 147.75.109.163 port 34802 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:49:23.900083 sshd[1972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:23.913655 systemd[1]: Created slice user-500.slice.
Sep 13 00:49:23.914911 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:49:23.919728 systemd-logind[1741]: New session 1 of user core.
Sep 13 00:49:23.927169 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:49:23.928987 systemd[1]: Starting user@500.service...
Sep 13 00:49:23.939646 (systemd)[1977]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:24.033863 systemd[1977]: Queued start job for default target default.target.
Sep 13 00:49:24.034160 systemd[1977]: Reached target paths.target.
Sep 13 00:49:24.034182 systemd[1977]: Reached target sockets.target.
Sep 13 00:49:24.034197 systemd[1977]: Reached target timers.target.
Sep 13 00:49:24.034210 systemd[1977]: Reached target basic.target.
Sep 13 00:49:24.034587 systemd[1]: Started user@500.service.
Sep 13 00:49:24.035558 systemd[1]: Started session-1.scope.
Sep 13 00:49:24.035903 systemd[1977]: Reached target default.target.
Sep 13 00:49:24.036179 systemd[1977]: Startup finished in 89ms.
Sep 13 00:49:24.172513 systemd[1]: Started sshd@1-172.31.30.243:22-147.75.109.163:34804.service.
Sep 13 00:49:24.339725 sshd[1986]: Accepted publickey for core from 147.75.109.163 port 34804 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:49:24.341104 sshd[1986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:24.346938 systemd[1]: Started session-2.scope.
Sep 13 00:49:24.347726 systemd-logind[1741]: New session 2 of user core.
Sep 13 00:49:24.474221 sshd[1986]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:24.477586 systemd[1]: sshd@1-172.31.30.243:22-147.75.109.163:34804.service: Deactivated successfully.
Sep 13 00:49:24.479507 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:49:24.480141 systemd-logind[1741]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:49:24.481626 systemd-logind[1741]: Removed session 2.
Sep 13 00:49:24.497406 systemd[1]: Started sshd@2-172.31.30.243:22-147.75.109.163:34820.service.
Sep 13 00:49:24.656422 sshd[1993]: Accepted publickey for core from 147.75.109.163 port 34820 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:49:24.658417 sshd[1993]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:24.664233 systemd[1]: Started session-3.scope.
Sep 13 00:49:24.665310 systemd-logind[1741]: New session 3 of user core.
Sep 13 00:49:24.786453 sshd[1993]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:24.789524 systemd[1]: sshd@2-172.31.30.243:22-147.75.109.163:34820.service: Deactivated successfully.
Sep 13 00:49:24.791016 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:49:24.791665 systemd-logind[1741]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:49:24.792752 systemd-logind[1741]: Removed session 3.
Sep 13 00:49:24.808720 systemd[1]: Started sshd@3-172.31.30.243:22-147.75.109.163:34822.service.
Sep 13 00:49:24.969916 sshd[2000]: Accepted publickey for core from 147.75.109.163 port 34822 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:49:24.971273 sshd[2000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:24.976756 systemd[1]: Started session-4.scope.
Sep 13 00:49:24.977080 systemd-logind[1741]: New session 4 of user core.
Sep 13 00:49:25.102643 sshd[2000]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:25.105804 systemd[1]: sshd@3-172.31.30.243:22-147.75.109.163:34822.service: Deactivated successfully.
Sep 13 00:49:25.107054 systemd-logind[1741]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:49:25.107165 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:49:25.108933 systemd-logind[1741]: Removed session 4.
Sep 13 00:49:25.124446 systemd[1]: Started sshd@4-172.31.30.243:22-147.75.109.163:34826.service.
Sep 13 00:49:25.280564 sshd[2007]: Accepted publickey for core from 147.75.109.163 port 34826 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:49:25.281637 sshd[2007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:25.287257 systemd[1]: Started session-5.scope.
Sep 13 00:49:25.287534 systemd-logind[1741]: New session 5 of user core.
Sep 13 00:49:25.415970 sudo[2011]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 13 00:49:25.416222 sudo[2011]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:49:25.425113 dbus-daemon[1730]: \xd0\xfd\u0008\xa1\xeaU: received setenforce notice (enforcing=-1666040016)
Sep 13 00:49:25.427054 sudo[2011]: pam_unix(sudo:session): session closed for user root
Sep 13 00:49:25.450818 sshd[2007]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:25.454409 systemd[1]: sshd@4-172.31.30.243:22-147.75.109.163:34826.service: Deactivated successfully.
Sep 13 00:49:25.455393 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:49:25.455769 systemd-logind[1741]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:49:25.456747 systemd-logind[1741]: Removed session 5.
Sep 13 00:49:25.475108 systemd[1]: Started sshd@5-172.31.30.243:22-147.75.109.163:34828.service.
Sep 13 00:49:25.632788 sshd[2015]: Accepted publickey for core from 147.75.109.163 port 34828 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:49:25.633803 sshd[2015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:25.639733 systemd[1]: Started session-6.scope.
Sep 13 00:49:25.640238 systemd-logind[1741]: New session 6 of user core.
Sep 13 00:49:25.744155 sudo[2020]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 13 00:49:25.744441 sudo[2020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:49:25.748073 sudo[2020]: pam_unix(sudo:session): session closed for user root
Sep 13 00:49:25.753704 sudo[2019]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 13 00:49:25.753991 sudo[2019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:49:25.764770 systemd[1]: Stopping audit-rules.service...
Sep 13 00:49:25.765000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Sep 13 00:49:25.767225 kernel: kauditd_printk_skb: 174 callbacks suppressed
Sep 13 00:49:25.767275 kernel: audit: type=1305 audit(1757724565.765:156): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Sep 13 00:49:25.765000 audit[2023]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec349f480 a2=420 a3=0 items=0 ppid=1 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:25.769765 auditctl[2023]: No rules
Sep 13 00:49:25.770269 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:49:25.770505 systemd[1]: Stopped audit-rules.service.
Sep 13 00:49:25.772643 systemd[1]: Starting audit-rules.service...
Sep 13 00:49:25.774261 kernel: audit: type=1300 audit(1757724565.765:156): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec349f480 a2=420 a3=0 items=0 ppid=1 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:25.774315 kernel: audit: type=1327 audit(1757724565.765:156): proctitle=2F7362696E2F617564697463746C002D44
Sep 13 00:49:25.765000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Sep 13 00:49:25.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:25.783204 kernel: audit: type=1131 audit(1757724565.769:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:25.818780 augenrules[2041]: No rules
Sep 13 00:49:25.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:25.819669 systemd[1]: Finished audit-rules.service.
Sep 13 00:49:25.821183 sudo[2019]: pam_unix(sudo:session): session closed for user root
Sep 13 00:49:25.838168 kernel: audit: type=1130 audit(1757724565.819:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:25.820000 audit[2019]: USER_END pid=2019 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:25.820000 audit[2019]: CRED_DISP pid=2019 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:25.852587 sshd[2015]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:25.855148 kernel: audit: type=1106 audit(1757724565.820:159): pid=2019 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:25.855251 kernel: audit: type=1104 audit(1757724565.820:160): pid=2019 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:25.850000 audit[2015]: USER_END pid=2015 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:49:25.856822 systemd-logind[1741]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:49:25.857840 systemd[1]: sshd@5-172.31.30.243:22-147.75.109.163:34828.service: Deactivated successfully.
Sep 13 00:49:25.858803 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:49:25.860244 systemd-logind[1741]: Removed session 6.
Sep 13 00:49:25.866292 kernel: audit: type=1106 audit(1757724565.850:161): pid=2015 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:49:25.866404 kernel: audit: type=1104 audit(1757724565.850:162): pid=2015 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:49:25.866493 kernel: audit: type=1131 audit(1757724565.854:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.30.243:22-147.75.109.163:34828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:25.850000 audit[2015]: CRED_DISP pid=2015 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:49:25.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.30.243:22-147.75.109.163:34828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:25.876453 systemd[1]: Started sshd@6-172.31.30.243:22-147.75.109.163:34830.service.
Sep 13 00:49:25.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.243:22-147.75.109.163:34830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:26.036000 audit[2048]: USER_ACCT pid=2048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:49:26.037500 sshd[2048]: Accepted publickey for core from 147.75.109.163 port 34830 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:49:26.037000 audit[2048]: CRED_ACQ pid=2048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:49:26.037000 audit[2048]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc25099af0 a2=3 a3=0 items=0 ppid=1 pid=2048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.037000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:49:26.038882 sshd[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:26.045569 systemd[1]: Started session-7.scope.
Sep 13 00:49:26.045836 systemd-logind[1741]: New session 7 of user core.
Sep 13 00:49:26.051000 audit[2048]: USER_START pid=2048 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:49:26.053000 audit[2051]: CRED_ACQ pid=2051 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:49:26.150000 audit[2052]: USER_ACCT pid=2052 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:26.151067 sudo[2052]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:49:26.150000 audit[2052]: CRED_REFR pid=2052 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:26.151311 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:49:26.152000 audit[2052]: USER_START pid=2052 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:26.178973 systemd[1]: Starting docker.service...
Sep 13 00:49:26.220680 env[2062]: time="2025-09-13T00:49:26.220625816Z" level=info msg="Starting up"
Sep 13 00:49:26.222524 env[2062]: time="2025-09-13T00:49:26.222486653Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:49:26.222524 env[2062]: time="2025-09-13T00:49:26.222511275Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:49:26.222686 env[2062]: time="2025-09-13T00:49:26.222536117Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:49:26.222686 env[2062]: time="2025-09-13T00:49:26.222549667Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:49:26.227437 env[2062]: time="2025-09-13T00:49:26.227391518Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:49:26.227437 env[2062]: time="2025-09-13T00:49:26.227417092Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:49:26.227437 env[2062]: time="2025-09-13T00:49:26.227438989Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:49:26.227716 env[2062]: time="2025-09-13T00:49:26.227451446Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:49:26.237025 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1717782026-merged.mount: Deactivated successfully.
Sep 13 00:49:26.273525 env[2062]: time="2025-09-13T00:49:26.273492648Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 13 00:49:26.273724 env[2062]: time="2025-09-13T00:49:26.273712282Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 13 00:49:26.274003 env[2062]: time="2025-09-13T00:49:26.273986065Z" level=info msg="Loading containers: start."
Sep 13 00:49:26.372000 audit[2092]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:26.372000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffed335f850 a2=0 a3=7ffed335f83c items=0 ppid=2062 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.372000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Sep 13 00:49:26.375000 audit[2094]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:26.375000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffec7ad80c0 a2=0 a3=7ffec7ad80ac items=0 ppid=2062 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.375000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Sep 13 00:49:26.377000 audit[2096]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:26.377000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd0f538860 a2=0 a3=7ffd0f53884c items=0 ppid=2062 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.377000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Sep 13 00:49:26.379000 audit[2098]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:26.379000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc1701c170 a2=0 a3=7ffc1701c15c items=0 ppid=2062 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.379000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Sep 13 00:49:26.390000 audit[2100]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2100 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:26.390000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffda32dcc40 a2=0 a3=7ffda32dcc2c items=0 ppid=2062 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.390000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Sep 13 00:49:26.411000 audit[2105]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:26.411000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd4cce4260 a2=0 a3=7ffd4cce424c items=0 ppid=2062 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.411000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Sep 13 00:49:26.420000 audit[2107]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:26.420000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdde508620 a2=0 a3=7ffdde50860c items=0 ppid=2062 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.420000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Sep 13 00:49:26.423000 audit[2109]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:26.423000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe927c9bd0 a2=0 a3=7ffe927c9bbc items=0 ppid=2062 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.423000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Sep 13 00:49:26.425000 audit[2111]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:26.425000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc6241a1a0 a2=0 a3=7ffc6241a18c items=0 ppid=2062 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.425000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:49:26.435000 audit[2115]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2115 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:26.435000 audit[2115]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe85a52c30 a2=0 a3=7ffe85a52c1c items=0 ppid=2062 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.435000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:49:26.440000 audit[2116]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:26.440000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe54922b90 a2=0 a3=7ffe54922b7c items=0 ppid=2062 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:26.440000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:49:26.452901 kernel: Initializing XFRM netlink socket
Sep 13 00:49:26.515475 env[2062]: time="2025-09-13T00:49:26.515437225Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:49:26.518370 (udev-worker)[2072]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:49:26.552000 audit[2124]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.552000 audit[2124]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd26c9df00 a2=0 a3=7ffd26c9deec items=0 ppid=2062 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.552000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 13 00:49:26.564000 audit[2127]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.564000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdd90a82e0 a2=0 a3=7ffdd90a82cc items=0 ppid=2062 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.564000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 13 00:49:26.567000 audit[2130]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.567000 audit[2130]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdf584ddd0 a2=0 a3=7ffdf584ddbc items=0 ppid=2062 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.567000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 13 00:49:26.570000 audit[2132]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.570000 audit[2132]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd32a53b70 a2=0 a3=7ffd32a53b5c items=0 ppid=2062 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.570000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 13 00:49:26.572000 audit[2134]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.572000 audit[2134]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fffbd2b9120 a2=0 a3=7fffbd2b910c items=0 ppid=2062 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.572000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 13 00:49:26.575000 audit[2136]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.575000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffcb450c4e0 a2=0 a3=7ffcb450c4cc items=0 ppid=2062 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.575000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 13 00:49:26.577000 audit[2138]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.577000 audit[2138]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffdd7b92f00 a2=0 a3=7ffdd7b92eec items=0 ppid=2062 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.577000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 13 00:49:26.599000 audit[2141]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.599000 audit[2141]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff75e153e0 a2=0 a3=7fff75e153cc items=0 ppid=2062 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.599000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 13 00:49:26.601000 audit[2143]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 
13 00:49:26.601000 audit[2143]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffc16470d50 a2=0 a3=7ffc16470d3c items=0 ppid=2062 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.601000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:49:26.604000 audit[2145]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.604000 audit[2145]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffec9bfa600 a2=0 a3=7ffec9bfa5ec items=0 ppid=2062 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.604000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:49:26.607000 audit[2147]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.607000 audit[2147]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff966f8330 a2=0 a3=7fff966f831c items=0 ppid=2062 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.607000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 13 00:49:26.608819 systemd-networkd[1433]: docker0: Link UP Sep 13 00:49:26.620000 audit[2151]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.620000 audit[2151]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe4e180620 a2=0 a3=7ffe4e18060c items=0 ppid=2062 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.620000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:49:26.625000 audit[2152]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:26.625000 audit[2152]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcb35bd960 a2=0 a3=7ffcb35bd94c items=0 ppid=2062 pid=2152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:26.625000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:49:26.629188 env[2062]: time="2025-09-13T00:49:26.626620334Z" level=info msg="Loading containers: done." 
Sep 13 00:49:26.649222 env[2062]: time="2025-09-13T00:49:26.649167010Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:49:26.649420 env[2062]: time="2025-09-13T00:49:26.649357314Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 00:49:26.649475 env[2062]: time="2025-09-13T00:49:26.649458159Z" level=info msg="Daemon has completed initialization" Sep 13 00:49:26.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:26.668565 systemd[1]: Started docker.service. Sep 13 00:49:26.678967 env[2062]: time="2025-09-13T00:49:26.678901639Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:49:27.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:27.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:27.169994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:49:27.170300 systemd[1]: Stopped kubelet.service. Sep 13 00:49:27.172083 systemd[1]: Starting kubelet.service... Sep 13 00:49:27.475540 systemd[1]: Started kubelet.service. Sep 13 00:49:27.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:27.536268 kubelet[2190]: E0913 00:49:27.536215 2190 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:49:27.539486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:49:27.539699 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:49:27.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:49:27.756479 env[1756]: time="2025-09-13T00:49:27.756343025Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:49:28.369830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834518069.mount: Deactivated successfully. 
Sep 13 00:49:29.919019 env[1756]: time="2025-09-13T00:49:29.918955995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:29.921584 env[1756]: time="2025-09-13T00:49:29.921542808Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:29.923602 env[1756]: time="2025-09-13T00:49:29.923554118Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:29.925610 env[1756]: time="2025-09-13T00:49:29.925580288Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:29.926220 env[1756]: time="2025-09-13T00:49:29.926187877Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:49:29.927158 env[1756]: time="2025-09-13T00:49:29.927129133Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:49:31.613947 env[1756]: time="2025-09-13T00:49:31.613885879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:31.616927 env[1756]: time="2025-09-13T00:49:31.616866175Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:49:31.619067 env[1756]: time="2025-09-13T00:49:31.619025295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:31.621203 env[1756]: time="2025-09-13T00:49:31.621170995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:31.622107 env[1756]: time="2025-09-13T00:49:31.622067904Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:49:31.623774 env[1756]: time="2025-09-13T00:49:31.623725084Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:49:33.042595 env[1756]: time="2025-09-13T00:49:33.042538447Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:33.046101 env[1756]: time="2025-09-13T00:49:33.046059882Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:33.048544 env[1756]: time="2025-09-13T00:49:33.048496979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:33.051278 env[1756]: time="2025-09-13T00:49:33.051233480Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:33.052211 env[1756]: time="2025-09-13T00:49:33.052174631Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:49:33.053641 env[1756]: time="2025-09-13T00:49:33.053611961Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:49:34.123001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489026183.mount: Deactivated successfully. Sep 13 00:49:34.779919 env[1756]: time="2025-09-13T00:49:34.779740167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:34.782737 env[1756]: time="2025-09-13T00:49:34.782696811Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:34.784653 env[1756]: time="2025-09-13T00:49:34.784612044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:34.786309 env[1756]: time="2025-09-13T00:49:34.786272226Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:34.786846 env[1756]: time="2025-09-13T00:49:34.786810404Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference 
\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:49:34.787484 env[1756]: time="2025-09-13T00:49:34.787405961Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:49:35.248315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount755143033.mount: Deactivated successfully. Sep 13 00:49:36.354439 env[1756]: time="2025-09-13T00:49:36.354376345Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:36.361985 env[1756]: time="2025-09-13T00:49:36.361936840Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:36.367484 env[1756]: time="2025-09-13T00:49:36.367440226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:36.374064 env[1756]: time="2025-09-13T00:49:36.374015494Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:36.375094 env[1756]: time="2025-09-13T00:49:36.374895801Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:49:36.377593 env[1756]: time="2025-09-13T00:49:36.377546922Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:49:36.875146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2026297729.mount: Deactivated successfully. 
Sep 13 00:49:36.890137 env[1756]: time="2025-09-13T00:49:36.890078901Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:36.894459 env[1756]: time="2025-09-13T00:49:36.894407285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:36.897908 env[1756]: time="2025-09-13T00:49:36.897845441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:36.900757 env[1756]: time="2025-09-13T00:49:36.900708424Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:36.901479 env[1756]: time="2025-09-13T00:49:36.901439304Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:49:36.902152 env[1756]: time="2025-09-13T00:49:36.902125804Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:49:37.431097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567862959.mount: Deactivated successfully. Sep 13 00:49:37.673757 kernel: kauditd_printk_skb: 88 callbacks suppressed Sep 13 00:49:37.673900 kernel: audit: type=1130 audit(1757724577.670:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:37.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:37.671284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:49:37.671545 systemd[1]: Stopped kubelet.service. Sep 13 00:49:37.673556 systemd[1]: Starting kubelet.service... Sep 13 00:49:37.681589 kernel: audit: type=1131 audit(1757724577.670:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:37.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:37.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:37.952457 systemd[1]: Started kubelet.service. Sep 13 00:49:37.957955 kernel: audit: type=1130 audit(1757724577.951:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:38.073896 kubelet[2207]: E0913 00:49:38.073835 2207 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:49:38.080671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:49:38.080950 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:49:38.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:49:38.086928 kernel: audit: type=1131 audit(1757724578.080:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:49:39.144158 amazon-ssm-agent[1807]: 2025-09-13 00:49:39 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Sep 13 00:49:39.855554 env[1756]: time="2025-09-13T00:49:39.855505508Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:39.859890 env[1756]: time="2025-09-13T00:49:39.859837554Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:39.862998 env[1756]: time="2025-09-13T00:49:39.862945158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:39.865989 env[1756]: time="2025-09-13T00:49:39.865938846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:39.866685 env[1756]: time="2025-09-13T00:49:39.866645886Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:49:41.994136 systemd[1]: Stopped kubelet.service. Sep 13 00:49:41.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:41.997957 systemd[1]: Starting kubelet.service... Sep 13 00:49:42.000687 kernel: audit: type=1130 audit(1757724581.993:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Sep 13 00:49:42.000788 kernel: audit: type=1131 audit(1757724581.993:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:41.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:42.041393 systemd[1]: Reloading.
Sep 13 00:49:42.175130 /usr/lib/systemd/system-generators/torcx-generator[2259]: time="2025-09-13T00:49:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:49:42.175170 /usr/lib/systemd/system-generators/torcx-generator[2259]: time="2025-09-13T00:49:42Z" level=info msg="torcx already run"
Sep 13 00:49:42.321614 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:49:42.321638 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:49:42.350650 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:49:42.504126 kernel: audit: type=1130 audit(1757724582.498:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 00:49:42.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 00:49:42.499037 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:49:42.499138 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:49:42.499434 systemd[1]: Stopped kubelet.service.
Sep 13 00:49:42.501443 systemd[1]: Starting kubelet.service...
Sep 13 00:49:42.910501 kernel: audit: type=1130 audit(1757724582.902:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:42.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:49:42.902920 systemd[1]: Started kubelet.service.
Sep 13 00:49:42.970780 kubelet[2329]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:49:42.970780 kubelet[2329]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:49:42.970780 kubelet[2329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:49:42.971342 kubelet[2329]: I0913 00:49:42.970860    2329 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:49:43.185163 kubelet[2329]: I0913 00:49:43.185122    2329 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:49:43.185163 kubelet[2329]: I0913 00:49:43.185151    2329 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:49:43.185491 kubelet[2329]: I0913 00:49:43.185467    2329 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:49:43.219248 kubelet[2329]: E0913 00:49:43.219202    2329 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.243:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.243:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:49:43.222432 kubelet[2329]: I0913 00:49:43.222372    2329 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:49:43.236312 kubelet[2329]: E0913 00:49:43.236280    2329 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:49:43.236568 kubelet[2329]: I0913 00:49:43.236523    2329 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:49:43.241078 kubelet[2329]: I0913 00:49:43.241045    2329 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:49:43.242081 kubelet[2329]: I0913 00:49:43.241391    2329 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:49:43.242081 kubelet[2329]: I0913 00:49:43.241498    2329 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:49:43.242081 kubelet[2329]: I0913 00:49:43.241523    2329 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-243","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:49:43.242081 kubelet[2329]: I0913 00:49:43.241699    2329 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:49:43.243824 kubelet[2329]: I0913 00:49:43.241708    2329 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:49:43.243824 kubelet[2329]: I0913 00:49:43.241790    2329 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:49:43.250747 kubelet[2329]: I0913 00:49:43.250710    2329 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:49:43.250930 kubelet[2329]: I0913 00:49:43.250767    2329 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:49:43.250930 kubelet[2329]: I0913 00:49:43.250814    2329 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:49:43.250930 kubelet[2329]: I0913 00:49:43.250839    2329 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:49:43.268712 kubelet[2329]: W0913 00:49:43.268641    2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-243&limit=500&resourceVersion=0": dial tcp 172.31.30.243:6443: connect: connection refused
Sep 13 00:49:43.268712 kubelet[2329]: E0913 00:49:43.268714    2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-243&limit=500&resourceVersion=0\": dial tcp 172.31.30.243:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:49:43.269769 kubelet[2329]: I0913 00:49:43.269145    2329 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:49:43.269769 kubelet[2329]: I0913 00:49:43.269556    2329 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:49:43.274038 kubelet[2329]: W0913 00:49:43.274005    2329 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:49:43.276764 kubelet[2329]: W0913 00:49:43.276706    2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.243:6443: connect: connection refused
Sep 13 00:49:43.276764 kubelet[2329]: E0913 00:49:43.276765    2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.243:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:49:43.282750 kubelet[2329]: I0913 00:49:43.282721    2329 server.go:1274] "Started kubelet"
Sep 13 00:49:43.290551 kernel: audit: type=1400 audit(1757724583.283:210): avc:  denied  { mac_admin } for  pid=2329 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:49:43.290670 kernel: audit: type=1401 audit(1757724583.283:210): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:49:43.283000 audit[2329]: AVC avc:  denied  { mac_admin } for  pid=2329 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:49:43.283000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:49:43.290806 kubelet[2329]: I0913 00:49:43.284040    2329 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Sep 13 00:49:43.290806 kubelet[2329]: I0913 00:49:43.284088    2329 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Sep 13 00:49:43.290806 kubelet[2329]: I0913 00:49:43.284171    2329 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:49:43.290806 kubelet[2329]: I0913 00:49:43.289348    2329 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:49:43.290806 kubelet[2329]: I0913 00:49:43.290225    2329 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:49:43.283000 audit[2329]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000964d80 a1=c000b36090 a2=c000964d50 a3=25 items=0 ppid=1 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.283000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:49:43.301650 kubelet[2329]: I0913 00:49:43.301599    2329 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:49:43.302040 kubelet[2329]: I0913 00:49:43.302024    2329 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:49:43.302384 kubelet[2329]: I0913 00:49:43.302370    2329 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:49:43.304475 kubelet[2329]: I0913 00:49:43.304463    2329 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:49:43.304801 kubelet[2329]: E0913 00:49:43.304786    2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-243\" not found"
Sep 13 00:49:43.305602 kernel: audit: type=1300 audit(1757724583.283:210): arch=c000003e syscall=188 success=no exit=-22 a0=c000964d80 a1=c000b36090 a2=c000964d50 a3=25 items=0 ppid=1 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.305674 kernel: audit: type=1327 audit(1757724583.283:210): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:49:43.283000 audit[2329]: AVC avc:  denied  { mac_admin } for  pid=2329 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:49:43.311717 kubelet[2329]: E0913 00:49:43.309462    2329 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.243:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.243:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-243.1864b132178927b8  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-243,UID:ip-172-31-30-243,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-243,},FirstTimestamp:2025-09-13 00:49:43.282681784 +0000 UTC m=+0.362183566,LastTimestamp:2025-09-13 00:49:43.282681784 +0000 UTC m=+0.362183566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-243,}"
Sep 13 00:49:43.311717 kubelet[2329]: E0913 00:49:43.311215    2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-243?timeout=10s\": dial tcp 172.31.30.243:6443: connect: connection refused" interval="200ms"
Sep 13 00:49:43.311717 kubelet[2329]: I0913 00:49:43.311378    2329 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:49:43.311717 kubelet[2329]: I0913 00:49:43.311686    2329 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:49:43.311911 kernel: audit: type=1400 audit(1757724583.283:211): avc:  denied  { mac_admin } for  pid=2329 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:49:43.312505 kubelet[2329]: W0913 00:49:43.312066    2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.243:6443: connect: connection refused
Sep 13 00:49:43.312505 kubelet[2329]: E0913 00:49:43.312117    2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.243:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:49:43.283000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:49:43.315777 kubelet[2329]: I0913 00:49:43.312854    2329 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:49:43.315777 kubelet[2329]: I0913 00:49:43.312953    2329 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:49:43.315890 kernel: audit: type=1401 audit(1757724583.283:211): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:49:43.283000 audit[2329]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00096da40 a1=c000b360a8 a2=c000964e10 a3=25 items=0 ppid=1 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.318058 kubelet[2329]: I0913 00:49:43.317176    2329 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:49:43.320216 kubelet[2329]: E0913 00:49:43.320198    2329 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:49:43.283000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:49:43.330499 kernel: audit: type=1300 audit(1757724583.283:211): arch=c000003e syscall=188 success=no exit=-22 a0=c00096da40 a1=c000b360a8 a2=c000964e10 a3=25 items=0 ppid=1 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.330594 kernel: audit: type=1327 audit(1757724583.283:211): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:49:43.330625 kernel: audit: type=1325 audit(1757724583.283:212): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:43.283000 audit[2341]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:43.283000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd153ed030 a2=0 a3=7ffd153ed01c items=0 ppid=2329 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.283000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Sep 13 00:49:43.286000 audit[2342]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:43.286000 audit[2342]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdb30bee0 a2=0 a3=7ffcdb30becc items=0 ppid=2329 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.286000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Sep 13 00:49:43.305000 audit[2344]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:43.305000 audit[2344]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd884d3400 a2=0 a3=7ffd884d33ec items=0 ppid=2329 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.305000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Sep 13 00:49:43.311000 audit[2346]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:43.311000 audit[2346]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc97e6e4a0 a2=0 a3=7ffc97e6e48c items=0 ppid=2329 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.311000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Sep 13 00:49:43.347000 audit[2350]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:43.347000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffeb26b4210 a2=0 a3=7ffeb26b41fc items=0 ppid=2329 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.347000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Sep 13 00:49:43.353286 kubelet[2329]: I0913 00:49:43.353241    2329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:49:43.354000 audit[2353]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:49:43.354000 audit[2353]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffdaf74a80 a2=0 a3=7fffdaf74a6c items=0 ppid=2329 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.354000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Sep 13 00:49:43.356848 kubelet[2329]: I0913 00:49:43.356763    2329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:49:43.356848 kubelet[2329]: I0913 00:49:43.356790    2329 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:49:43.356848 kubelet[2329]: I0913 00:49:43.356811    2329 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:49:43.356000 audit[2354]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:43.356000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7a6aca30 a2=0 a3=7ffc7a6aca1c items=0 ppid=2329 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.357655 kubelet[2329]: W0913 00:49:43.357620    2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.243:6443: connect: connection refused
Sep 13 00:49:43.357699 kubelet[2329]: E0913 00:49:43.357652    2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.243:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:49:43.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Sep 13 00:49:43.358492 kubelet[2329]: E0913 00:49:43.358469    2329 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:49:43.359125 kubelet[2329]: I0913 00:49:43.359106    2329 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:49:43.359210 kubelet[2329]: I0913 00:49:43.359201    2329 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:49:43.359273 kubelet[2329]: I0913 00:49:43.359266    2329 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:49:43.358000 audit[2357]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:43.358000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc49deb550 a2=0 a3=7ffc49deb53c items=0 ppid=2329 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Sep 13 00:49:43.359000 audit[2356]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:49:43.359000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed0cdce10 a2=0 a3=7ffed0cdcdfc items=0 ppid=2329 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.359000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Sep 13 00:49:43.360000 audit[2358]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:49:43.360000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7324ea30 a2=0 a3=7ffd7324ea1c items=0 ppid=2329 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Sep 13 00:49:43.361000 audit[2359]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:49:43.361000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe29efa1d0 a2=0 a3=7ffe29efa1bc items=0 ppid=2329 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.361000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Sep 13 00:49:43.362000 audit[2360]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:49:43.362000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcf0791ea0 a2=0 a3=7ffcf0791e8c items=0 ppid=2329 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.362000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Sep 13 00:49:43.364860 kubelet[2329]: I0913 00:49:43.364842    2329 policy_none.go:49] "None policy: Start"
Sep 13 00:49:43.365947 kubelet[2329]: I0913 00:49:43.365930    2329 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:49:43.366193 kubelet[2329]: I0913 00:49:43.366065    2329 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:49:43.373981 kubelet[2329]: I0913 00:49:43.373958    2329 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:49:43.373000 audit[2329]: AVC avc:  denied  { mac_admin } for  pid=2329 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:49:43.373000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:49:43.373000 audit[2329]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000712300 a1=c0005ef878 a2=c0007122d0 a3=25 items=0 ppid=1 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:49:43.373000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:49:43.374391 kubelet[2329]: I0913 00:49:43.374370    2329 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
Sep 13 00:49:43.374546 kubelet[2329]: I0913 00:49:43.374534    2329 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:49:43.374638 kubelet[2329]: I0913 00:49:43.374604    2329 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:49:43.376134 kubelet[2329]: I0913 00:49:43.376119    2329 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:49:43.381298 kubelet[2329]: E0913 00:49:43.381274    2329 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-243\" not found"
Sep 13 00:49:43.476978 kubelet[2329]: I0913 00:49:43.476838    2329 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-243"
Sep 13 00:49:43.479255 kubelet[2329]: E0913 00:49:43.478556    2329 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.243:6443/api/v1/nodes\": dial tcp 172.31.30.243:6443: connect: connection refused" node="ip-172-31-30-243"
Sep 13 00:49:43.512210 kubelet[2329]: E0913 00:49:43.512163    2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-243?timeout=10s\": dial tcp 172.31.30.243:6443: connect: connection refused" interval="400ms"
Sep 13 00:49:43.612645 kubelet[2329]: I0913 00:49:43.612574    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6cc1ec91d7ba79ac87688d829322fbf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-243\" (UID: \"d6cc1ec91d7ba79ac87688d829322fbf\") " pod="kube-system/kube-apiserver-ip-172-31-30-243"
Sep 13 00:49:43.612645 kubelet[2329]: I0913 00:49:43.612633    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f46d22d2944f8fc600a4e65fcfb61ed6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-243\" (UID: \"f46d22d2944f8fc600a4e65fcfb61ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-243"
Sep 13 00:49:43.612645 kubelet[2329]: I0913 00:49:43.612651    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f46d22d2944f8fc600a4e65fcfb61ed6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-243\" (UID: \"f46d22d2944f8fc600a4e65fcfb61ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-243"
Sep 13 00:49:43.612645 kubelet[2329]: I0913 00:49:43.612666    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6cc1ec91d7ba79ac87688d829322fbf-ca-certs\") pod \"kube-apiserver-ip-172-31-30-243\" (UID: \"d6cc1ec91d7ba79ac87688d829322fbf\") " pod="kube-system/kube-apiserver-ip-172-31-30-243"
Sep 13 00:49:43.612926 kubelet[2329]: I0913 00:49:43.612682    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6cc1ec91d7ba79ac87688d829322fbf-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-243\" (UID: \"d6cc1ec91d7ba79ac87688d829322fbf\") " pod="kube-system/kube-apiserver-ip-172-31-30-243"
Sep 13 00:49:43.612926 kubelet[2329]: I0913 00:49:43.612696    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f46d22d2944f8fc600a4e65fcfb61ed6-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-243\" (UID: \"f46d22d2944f8fc600a4e65fcfb61ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-243"
Sep 13 00:49:43.612926 kubelet[2329]: I0913 00:49:43.612717    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f46d22d2944f8fc600a4e65fcfb61ed6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-243\" (UID: \"f46d22d2944f8fc600a4e65fcfb61ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-243"
Sep 13 00:49:43.612926 kubelet[2329]: I0913 00:49:43.612732    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f46d22d2944f8fc600a4e65fcfb61ed6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-243\" (UID: \"f46d22d2944f8fc600a4e65fcfb61ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-243"
Sep 13 00:49:43.612926 kubelet[2329]: I0913 00:49:43.612749    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0499a40e82f48cb370310dada969d3f-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-243\" (UID: \"a0499a40e82f48cb370310dada969d3f\") " pod="kube-system/kube-scheduler-ip-172-31-30-243"
Sep 13 00:49:43.680322 kubelet[2329]: I0913 00:49:43.680293    2329 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-243"
Sep 13 00:49:43.680689 kubelet[2329]: E0913 00:49:43.680640    2329 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.243:6443/api/v1/nodes\": dial tcp 172.31.30.243:6443: connect: connection refused" node="ip-172-31-30-243"
Sep 13 00:49:43.767686 env[1756]: time="2025-09-13T00:49:43.767050181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-243,Uid:d6cc1ec91d7ba79ac87688d829322fbf,Namespace:kube-system,Attempt:0,}"
Sep 13 00:49:43.770289 env[1756]: time="2025-09-13T00:49:43.770248502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-243,Uid:f46d22d2944f8fc600a4e65fcfb61ed6,Namespace:kube-system,Attempt:0,}"
Sep 13 00:49:43.770899 env[1756]: time="2025-09-13T00:49:43.770853018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-243,Uid:a0499a40e82f48cb370310dada969d3f,Namespace:kube-system,Attempt:0,}"
Sep 13 00:49:43.913531 kubelet[2329]: E0913 00:49:43.913473    2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-243?timeout=10s\": dial tcp 172.31.30.243:6443: connect: connection refused" interval="800ms"
Sep 13 00:49:44.082808 kubelet[2329]: I0913 00:49:44.082705    2329 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-243"
Sep 13 00:49:44.083651 kubelet[2329]: E0913 00:49:44.083470    2329 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.243:6443/api/v1/nodes\": dial tcp 172.31.30.243:6443: connect: connection refused" node="ip-172-31-30-243"
Sep 13 00:49:44.163792 kubelet[2329]: W0913 00:49:44.163753    2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.243:6443: connect: connection refused
Sep 13 00:49:44.163977 kubelet[2329]: E0913 00:49:44.163803    2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.243:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:49:44.244205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4197566254.mount: Deactivated successfully.
Sep 13 00:49:44.259753 env[1756]: time="2025-09-13T00:49:44.259687351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:44.265562 env[1756]: time="2025-09-13T00:49:44.265518457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:44.267744 env[1756]: time="2025-09-13T00:49:44.267706375Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:44.269865 env[1756]: time="2025-09-13T00:49:44.269821547Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:44.274224 env[1756]: time="2025-09-13T00:49:44.274144511Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:44.276263 env[1756]: time="2025-09-13T00:49:44.276200969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:44.278229 env[1756]: time="2025-09-13T00:49:44.278189772Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:44.282433 env[1756]: time="2025-09-13T00:49:44.282387272Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image:
managed,},XXX_unrecognized:[],}" Sep 13 00:49:44.284020 env[1756]: time="2025-09-13T00:49:44.283980412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:44.286822 env[1756]: time="2025-09-13T00:49:44.286779906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:44.289164 env[1756]: time="2025-09-13T00:49:44.289122532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:44.295367 env[1756]: time="2025-09-13T00:49:44.295327107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:44.373323 env[1756]: time="2025-09-13T00:49:44.373164586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:49:44.373323 env[1756]: time="2025-09-13T00:49:44.373264105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:49:44.374337 env[1756]: time="2025-09-13T00:49:44.373297804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:49:44.374638 env[1756]: time="2025-09-13T00:49:44.374575418Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13f095153e18192a365eb77d20210db9bf5a4e86c162c39d7b53a0fe4f8f9a26 pid=2369 runtime=io.containerd.runc.v2 Sep 13 00:49:44.380538 env[1756]: time="2025-09-13T00:49:44.380270399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:49:44.380538 env[1756]: time="2025-09-13T00:49:44.380321121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:49:44.380538 env[1756]: time="2025-09-13T00:49:44.380338886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:49:44.381140 env[1756]: time="2025-09-13T00:49:44.381063232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/006cd48e3a5862f4214c812e788fde2c12bbfa3665fcf1954d17327a90006549 pid=2385 runtime=io.containerd.runc.v2 Sep 13 00:49:44.393538 env[1756]: time="2025-09-13T00:49:44.393435993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:49:44.393847 env[1756]: time="2025-09-13T00:49:44.393805402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:49:44.394034 env[1756]: time="2025-09-13T00:49:44.393994917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:49:44.394502 env[1756]: time="2025-09-13T00:49:44.394447709Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/773d760c3de845c4b4fa714679f5dfc7b0b62cb0d491d9ececb5da13a7e7f026 pid=2417 runtime=io.containerd.runc.v2 Sep 13 00:49:44.517745 env[1756]: time="2025-09-13T00:49:44.517698230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-243,Uid:a0499a40e82f48cb370310dada969d3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"773d760c3de845c4b4fa714679f5dfc7b0b62cb0d491d9ececb5da13a7e7f026\"" Sep 13 00:49:44.523411 env[1756]: time="2025-09-13T00:49:44.523364656Z" level=info msg="CreateContainer within sandbox \"773d760c3de845c4b4fa714679f5dfc7b0b62cb0d491d9ececb5da13a7e7f026\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:49:44.534986 env[1756]: time="2025-09-13T00:49:44.534938837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-243,Uid:f46d22d2944f8fc600a4e65fcfb61ed6,Namespace:kube-system,Attempt:0,} returns sandbox id \"13f095153e18192a365eb77d20210db9bf5a4e86c162c39d7b53a0fe4f8f9a26\"" Sep 13 00:49:44.539969 env[1756]: time="2025-09-13T00:49:44.539929670Z" level=info msg="CreateContainer within sandbox \"13f095153e18192a365eb77d20210db9bf5a4e86c162c39d7b53a0fe4f8f9a26\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:49:44.541118 env[1756]: time="2025-09-13T00:49:44.541081470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-243,Uid:d6cc1ec91d7ba79ac87688d829322fbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"006cd48e3a5862f4214c812e788fde2c12bbfa3665fcf1954d17327a90006549\"" Sep 13 00:49:44.546061 env[1756]: time="2025-09-13T00:49:44.546019734Z" level=info msg="CreateContainer within sandbox 
\"006cd48e3a5862f4214c812e788fde2c12bbfa3665fcf1954d17327a90006549\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:49:44.557455 env[1756]: time="2025-09-13T00:49:44.557411329Z" level=info msg="CreateContainer within sandbox \"773d760c3de845c4b4fa714679f5dfc7b0b62cb0d491d9ececb5da13a7e7f026\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5b158848ea6800b27c580dc2cb4c8ff68c854bb176f8f22af7debe879e583021\"" Sep 13 00:49:44.558647 env[1756]: time="2025-09-13T00:49:44.558593010Z" level=info msg="StartContainer for \"5b158848ea6800b27c580dc2cb4c8ff68c854bb176f8f22af7debe879e583021\"" Sep 13 00:49:44.581065 env[1756]: time="2025-09-13T00:49:44.581007282Z" level=info msg="CreateContainer within sandbox \"13f095153e18192a365eb77d20210db9bf5a4e86c162c39d7b53a0fe4f8f9a26\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e1613523b6616a759a942e6869856e4a81c2e6cac373ec82743765dff81a6b72\"" Sep 13 00:49:44.582546 env[1756]: time="2025-09-13T00:49:44.582517325Z" level=info msg="StartContainer for \"e1613523b6616a759a942e6869856e4a81c2e6cac373ec82743765dff81a6b72\"" Sep 13 00:49:44.585091 env[1756]: time="2025-09-13T00:49:44.585052261Z" level=info msg="CreateContainer within sandbox \"006cd48e3a5862f4214c812e788fde2c12bbfa3665fcf1954d17327a90006549\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a7f85c9d5e769f804863e06a0aa63ab4c3a5064f01dafb6bcf9aebaf1bc99dee\"" Sep 13 00:49:44.586458 env[1756]: time="2025-09-13T00:49:44.586392361Z" level=info msg="StartContainer for \"a7f85c9d5e769f804863e06a0aa63ab4c3a5064f01dafb6bcf9aebaf1bc99dee\"" Sep 13 00:49:44.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:44.679235 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 13 00:49:44.700852 kubelet[2329]: W0913 00:49:44.700782 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-243&limit=500&resourceVersion=0": dial tcp 172.31.30.243:6443: connect: connection refused Sep 13 00:49:44.701056 kubelet[2329]: E0913 00:49:44.700889 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-243&limit=500&resourceVersion=0\": dial tcp 172.31.30.243:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:49:44.720926 kubelet[2329]: E0913 00:49:44.720866 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-243?timeout=10s\": dial tcp 172.31.30.243:6443: connect: connection refused" interval="1.6s" Sep 13 00:49:44.743641 kubelet[2329]: W0913 00:49:44.743499 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.243:6443: connect: connection refused Sep 13 00:49:44.743824 kubelet[2329]: E0913 00:49:44.743644 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.243:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:49:44.743905 env[1756]: time="2025-09-13T00:49:44.743816088Z" level=info 
msg="StartContainer for \"e1613523b6616a759a942e6869856e4a81c2e6cac373ec82743765dff81a6b72\" returns successfully" Sep 13 00:49:44.744624 env[1756]: time="2025-09-13T00:49:44.744573344Z" level=info msg="StartContainer for \"5b158848ea6800b27c580dc2cb4c8ff68c854bb176f8f22af7debe879e583021\" returns successfully" Sep 13 00:49:44.784821 env[1756]: time="2025-09-13T00:49:44.782496453Z" level=info msg="StartContainer for \"a7f85c9d5e769f804863e06a0aa63ab4c3a5064f01dafb6bcf9aebaf1bc99dee\" returns successfully" Sep 13 00:49:44.870182 kubelet[2329]: W0913 00:49:44.870110 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.243:6443: connect: connection refused Sep 13 00:49:44.870367 kubelet[2329]: E0913 00:49:44.870203 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.243:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:49:44.886101 kubelet[2329]: I0913 00:49:44.886074 2329 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-243" Sep 13 00:49:44.886453 kubelet[2329]: E0913 00:49:44.886423 2329 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.243:6443/api/v1/nodes\": dial tcp 172.31.30.243:6443: connect: connection refused" node="ip-172-31-30-243" Sep 13 00:49:45.353617 kubelet[2329]: E0913 00:49:45.353572 2329 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.243:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial 
tcp 172.31.30.243:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:49:46.487848 kubelet[2329]: I0913 00:49:46.487823 2329 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-243" Sep 13 00:49:47.397380 kubelet[2329]: E0913 00:49:47.397344 2329 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-243\" not found" node="ip-172-31-30-243" Sep 13 00:49:47.521887 kubelet[2329]: I0913 00:49:47.521818 2329 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-243" Sep 13 00:49:47.521887 kubelet[2329]: E0913 00:49:47.521863 2329 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-30-243\": node \"ip-172-31-30-243\" not found" Sep 13 00:49:47.538382 kubelet[2329]: E0913 00:49:47.538316 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-243\" not found" Sep 13 00:49:48.276729 kubelet[2329]: I0913 00:49:48.276675 2329 apiserver.go:52] "Watching apiserver" Sep 13 00:49:48.312287 kubelet[2329]: I0913 00:49:48.312230 2329 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:49:49.885592 systemd[1]: Reloading. Sep 13 00:49:49.967027 /usr/lib/systemd/system-generators/torcx-generator[2619]: time="2025-09-13T00:49:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:49:49.967058 /usr/lib/systemd/system-generators/torcx-generator[2619]: time="2025-09-13T00:49:49Z" level=info msg="torcx already run" Sep 13 00:49:50.092674 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 13 00:49:50.092700 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:49:50.115628 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:49:50.219179 systemd[1]: Stopping kubelet.service... Sep 13 00:49:50.240415 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:49:50.240788 systemd[1]: Stopped kubelet.service. Sep 13 00:49:50.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:50.242579 kernel: kauditd_printk_skb: 40 callbacks suppressed Sep 13 00:49:50.242674 kernel: audit: type=1131 audit(1757724590.239:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:50.249839 systemd[1]: Starting kubelet.service... Sep 13 00:49:51.660545 systemd[1]: Started kubelet.service. Sep 13 00:49:51.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:51.669963 kernel: audit: type=1130 audit(1757724591.659:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:51.772055 kubelet[2691]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:49:51.772055 kubelet[2691]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:49:51.772055 kubelet[2691]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:49:51.772569 kubelet[2691]: I0913 00:49:51.772144 2691 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:49:51.787391 kubelet[2691]: I0913 00:49:51.787345 2691 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:49:51.787391 kubelet[2691]: I0913 00:49:51.787374 2691 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:49:51.788314 kubelet[2691]: I0913 00:49:51.788279 2691 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:49:51.792299 kubelet[2691]: I0913 00:49:51.791511 2691 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:49:51.797560 kubelet[2691]: I0913 00:49:51.796644 2691 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:49:51.805500 kubelet[2691]: E0913 00:49:51.805471 2691 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:49:51.805754 kubelet[2691]: I0913 00:49:51.805742 2691 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 13 00:49:51.809728 kubelet[2691]: I0913 00:49:51.809706 2691 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:49:51.811718 kubelet[2691]: I0913 00:49:51.811694 2691 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:49:51.811905 kubelet[2691]: I0913 00:49:51.811855 2691 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:49:51.813315 kubelet[2691]: I0913 00:49:51.811910 2691 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-243","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"Exp
erimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:49:51.813637 kubelet[2691]: I0913 00:49:51.813620 2691 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:49:51.813708 kubelet[2691]: I0913 00:49:51.813645 2691 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:49:51.813708 kubelet[2691]: I0913 00:49:51.813689 2691 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:49:51.813830 kubelet[2691]: I0913 00:49:51.813818 2691 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:49:51.813894 kubelet[2691]: I0913 00:49:51.813841 2691 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:49:51.813936 kubelet[2691]: I0913 00:49:51.813910 2691 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:49:51.813936 kubelet[2691]: I0913 00:49:51.813926 2691 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:49:51.825667 kubelet[2691]: I0913 00:49:51.820736 2691 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:49:51.825667 kubelet[2691]: I0913 00:49:51.821916 2691 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:49:51.826041 kubelet[2691]: I0913 00:49:51.826003 2691 server.go:1274] "Started kubelet" Sep 13 00:49:51.861122 kubelet[2691]: I0913 00:49:51.858088 2691 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:49:51.861122 kubelet[2691]: I0913 00:49:51.859255 2691 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:49:51.870000 audit[2691]: AVC avc: denied { mac_admin } for pid=2691 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:49:51.872335 kubelet[2691]: I0913 00:49:51.872303 2691 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:49:51.872449 kubelet[2691]: I0913 00:49:51.872437 2691 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:49:51.872532 kubelet[2691]: I0913 00:49:51.872525 2691 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:49:51.877905 kernel: audit: type=1400 audit(1757724591.870:228): avc: denied { mac_admin } for pid=2691 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:49:51.878029 kernel: audit: type=1401 audit(1757724591.870:228): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:49:51.870000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:49:51.890560 kernel: audit: type=1300 audit(1757724591.870:228): arch=c000003e syscall=188 success=no exit=-22 a0=c000830f60 a1=c000659fe0 a2=c000830f30 a3=25 items=0 ppid=1 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:51.870000 audit[2691]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000830f60 a1=c000659fe0 a2=c000830f30 a3=25 items=0 ppid=1 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 
00:49:51.891219 kubelet[2691]: I0913 00:49:51.882205 2691 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:49:51.891219 kubelet[2691]: I0913 00:49:51.883589 2691 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:49:51.891219 kubelet[2691]: I0913 00:49:51.884171 2691 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:49:51.891219 kubelet[2691]: I0913 00:49:51.884272 2691 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:49:51.870000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:49:51.903829 kernel: audit: type=1327 audit(1757724591.870:228): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:49:51.903943 kernel: audit: type=1400 audit(1757724591.870:229): avc: denied { mac_admin } for pid=2691 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:49:51.870000 audit[2691]: AVC avc: denied { mac_admin } for pid=2691 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:49:51.904021 kubelet[2691]: I0913 00:49:51.899494 2691 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:49:51.904021 kubelet[2691]: I0913 00:49:51.901997 2691 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:49:51.904021 kubelet[2691]: I0913 00:49:51.902027 2691 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:49:51.904021 kubelet[2691]: I0913 00:49:51.902050 2691 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:49:51.904021 kubelet[2691]: E0913 00:49:51.902092 2691 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:49:51.870000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:49:51.914631 kernel: audit: type=1401 audit(1757724591.870:229): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:49:51.914728 kernel: audit: type=1300 audit(1757724591.870:229): arch=c000003e syscall=188 success=no exit=-22 a0=c00082e600 a1=c0008aa000 a2=c000830ff0 a3=25 items=0 ppid=1 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:51.870000 audit[2691]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00082e600 a1=c0008aa000 a2=c000830ff0 a3=25 items=0 ppid=1 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:51.870000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:49:51.923907 kernel: audit: type=1327 audit(1757724591.870:229): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:49:51.924040 kubelet[2691]: I0913 00:49:51.924024 2691 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:49:51.924199 kubelet[2691]: I0913 00:49:51.924184 2691 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:49:51.931956 kubelet[2691]: I0913 00:49:51.931935 2691 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:49:51.947235 kubelet[2691]: I0913 00:49:51.947157 2691 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:49:51.947399 kubelet[2691]: I0913 00:49:51.947388 2691 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:49:52.003082 kubelet[2691]: E0913 00:49:52.003059 2691 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:49:52.029654 kubelet[2691]: I0913 00:49:52.029588 2691 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:49:52.029654 kubelet[2691]: I0913 00:49:52.029642 2691 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:49:52.029654 kubelet[2691]: I0913 00:49:52.029664 2691 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:49:52.029907 kubelet[2691]: I0913 00:49:52.029850 2691 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:49:52.029907 kubelet[2691]: I0913 00:49:52.029865 2691 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:49:52.030024 kubelet[2691]: I0913 00:49:52.029913 2691 
policy_none.go:49] "None policy: Start" Sep 13 00:49:52.031241 kubelet[2691]: I0913 00:49:52.030753 2691 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:49:52.031241 kubelet[2691]: I0913 00:49:52.030779 2691 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:49:52.031241 kubelet[2691]: I0913 00:49:52.031068 2691 state_mem.go:75] "Updated machine memory state" Sep 13 00:49:52.034236 kubelet[2691]: I0913 00:49:52.032580 2691 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:49:52.033000 audit[2691]: AVC avc: denied { mac_admin } for pid=2691 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:49:52.033000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:49:52.033000 audit[2691]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d6f950 a1=c000d7e6f0 a2=c000d6f920 a3=25 items=0 ppid=1 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:52.033000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:49:52.035895 kubelet[2691]: I0913 00:49:52.035615 2691 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:49:52.035970 kubelet[2691]: I0913 00:49:52.035943 2691 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:49:52.036049 kubelet[2691]: I0913 00:49:52.035958 2691 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:49:52.040031 kubelet[2691]: I0913 00:49:52.040012 2691 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:49:52.158250 kubelet[2691]: I0913 00:49:52.157722 2691 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-243" Sep 13 00:49:52.173312 kubelet[2691]: I0913 00:49:52.173285 2691 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-30-243" Sep 13 00:49:52.173542 kubelet[2691]: I0913 00:49:52.173532 2691 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-243" Sep 13 00:49:52.285323 kubelet[2691]: I0913 00:49:52.285282 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6cc1ec91d7ba79ac87688d829322fbf-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-243\" (UID: \"d6cc1ec91d7ba79ac87688d829322fbf\") " pod="kube-system/kube-apiserver-ip-172-31-30-243" Sep 13 00:49:52.285476 kubelet[2691]: I0913 00:49:52.285326 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6cc1ec91d7ba79ac87688d829322fbf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-243\" (UID: \"d6cc1ec91d7ba79ac87688d829322fbf\") " pod="kube-system/kube-apiserver-ip-172-31-30-243" Sep 13 00:49:52.285476 kubelet[2691]: I0913 00:49:52.285359 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f46d22d2944f8fc600a4e65fcfb61ed6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-243\" (UID: \"f46d22d2944f8fc600a4e65fcfb61ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-243" Sep 13 00:49:52.285476 kubelet[2691]: I0913 00:49:52.285376 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f46d22d2944f8fc600a4e65fcfb61ed6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-243\" (UID: \"f46d22d2944f8fc600a4e65fcfb61ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-243" Sep 13 00:49:52.285476 kubelet[2691]: I0913 00:49:52.285400 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f46d22d2944f8fc600a4e65fcfb61ed6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-243\" (UID: \"f46d22d2944f8fc600a4e65fcfb61ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-243" Sep 13 00:49:52.285476 kubelet[2691]: I0913 00:49:52.285417 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0499a40e82f48cb370310dada969d3f-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-243\" (UID: \"a0499a40e82f48cb370310dada969d3f\") " pod="kube-system/kube-scheduler-ip-172-31-30-243" Sep 13 00:49:52.285619 kubelet[2691]: I0913 00:49:52.285436 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6cc1ec91d7ba79ac87688d829322fbf-ca-certs\") pod \"kube-apiserver-ip-172-31-30-243\" (UID: \"d6cc1ec91d7ba79ac87688d829322fbf\") " pod="kube-system/kube-apiserver-ip-172-31-30-243" Sep 13 00:49:52.285619 kubelet[2691]: I0913 00:49:52.285459 2691 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f46d22d2944f8fc600a4e65fcfb61ed6-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-243\" (UID: \"f46d22d2944f8fc600a4e65fcfb61ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-243" Sep 13 00:49:52.285619 kubelet[2691]: I0913 00:49:52.285503 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f46d22d2944f8fc600a4e65fcfb61ed6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-243\" (UID: \"f46d22d2944f8fc600a4e65fcfb61ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-243" Sep 13 00:49:52.816324 kubelet[2691]: I0913 00:49:52.816277 2691 apiserver.go:52] "Watching apiserver" Sep 13 00:49:52.890844 kubelet[2691]: I0913 00:49:52.886635 2691 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:49:53.012900 kubelet[2691]: E0913 00:49:53.011765 2691 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-243\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-243" Sep 13 00:49:53.039096 kubelet[2691]: I0913 00:49:53.039001 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-243" podStartSLOduration=1.038979184 podStartE2EDuration="1.038979184s" podCreationTimestamp="2025-09-13 00:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:49:53.026029502 +0000 UTC m=+1.333658455" watchObservedRunningTime="2025-09-13 00:49:53.038979184 +0000 UTC m=+1.346608139" Sep 13 00:49:53.039313 kubelet[2691]: I0913 00:49:53.039195 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-243" 
podStartSLOduration=1.039171414 podStartE2EDuration="1.039171414s" podCreationTimestamp="2025-09-13 00:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:49:53.038934145 +0000 UTC m=+1.346563099" watchObservedRunningTime="2025-09-13 00:49:53.039171414 +0000 UTC m=+1.346800368" Sep 13 00:49:53.061497 kubelet[2691]: I0913 00:49:53.061430 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-243" podStartSLOduration=1.061402681 podStartE2EDuration="1.061402681s" podCreationTimestamp="2025-09-13 00:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:49:53.050494069 +0000 UTC m=+1.358123010" watchObservedRunningTime="2025-09-13 00:49:53.061402681 +0000 UTC m=+1.369031631" Sep 13 00:49:55.165843 kubelet[2691]: I0913 00:49:55.165805 2691 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:49:55.166264 env[1756]: time="2025-09-13T00:49:55.166151881Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:49:55.166488 kubelet[2691]: I0913 00:49:55.166415 2691 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:49:56.011317 kubelet[2691]: I0913 00:49:56.011276 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdb71d05-e1c2-4161-be92-3907a4ebac35-kube-proxy\") pod \"kube-proxy-6kmzd\" (UID: \"fdb71d05-e1c2-4161-be92-3907a4ebac35\") " pod="kube-system/kube-proxy-6kmzd" Sep 13 00:49:56.011630 kubelet[2691]: I0913 00:49:56.011609 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdb71d05-e1c2-4161-be92-3907a4ebac35-xtables-lock\") pod \"kube-proxy-6kmzd\" (UID: \"fdb71d05-e1c2-4161-be92-3907a4ebac35\") " pod="kube-system/kube-proxy-6kmzd" Sep 13 00:49:56.011776 kubelet[2691]: I0913 00:49:56.011755 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdb71d05-e1c2-4161-be92-3907a4ebac35-lib-modules\") pod \"kube-proxy-6kmzd\" (UID: \"fdb71d05-e1c2-4161-be92-3907a4ebac35\") " pod="kube-system/kube-proxy-6kmzd" Sep 13 00:49:56.011933 kubelet[2691]: I0913 00:49:56.011914 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbv6s\" (UniqueName: \"kubernetes.io/projected/fdb71d05-e1c2-4161-be92-3907a4ebac35-kube-api-access-hbv6s\") pod \"kube-proxy-6kmzd\" (UID: \"fdb71d05-e1c2-4161-be92-3907a4ebac35\") " pod="kube-system/kube-proxy-6kmzd" Sep 13 00:49:56.121861 kubelet[2691]: I0913 00:49:56.121821 2691 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:49:56.211619 env[1756]: time="2025-09-13T00:49:56.211574724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6kmzd,Uid:fdb71d05-e1c2-4161-be92-3907a4ebac35,Namespace:kube-system,Attempt:0,}" Sep 13 00:49:56.246474 env[1756]: time="2025-09-13T00:49:56.246381706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:49:56.246687 env[1756]: time="2025-09-13T00:49:56.246432124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:49:56.246687 env[1756]: time="2025-09-13T00:49:56.246448232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:49:56.246853 env[1756]: time="2025-09-13T00:49:56.246665111Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a64095b9e7a3df9c7b7bbeec70d204efd12088804bd99abec892d01242ef232 pid=2741 runtime=io.containerd.runc.v2 Sep 13 00:49:56.329211 env[1756]: time="2025-09-13T00:49:56.328633707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6kmzd,Uid:fdb71d05-e1c2-4161-be92-3907a4ebac35,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a64095b9e7a3df9c7b7bbeec70d204efd12088804bd99abec892d01242ef232\"" Sep 13 00:49:56.355810 env[1756]: time="2025-09-13T00:49:56.355754372Z" level=info msg="CreateContainer within sandbox \"0a64095b9e7a3df9c7b7bbeec70d204efd12088804bd99abec892d01242ef232\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:49:56.394501 env[1756]: time="2025-09-13T00:49:56.394445691Z" level=info msg="CreateContainer within sandbox \"0a64095b9e7a3df9c7b7bbeec70d204efd12088804bd99abec892d01242ef232\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5ec130c13d88ddc54b8855a70f673e0fa35cd4539315a87e5613a982893e840a\"" Sep 13 00:49:56.395588 env[1756]: time="2025-09-13T00:49:56.395530431Z" level=info msg="StartContainer for \"5ec130c13d88ddc54b8855a70f673e0fa35cd4539315a87e5613a982893e840a\"" Sep 13 00:49:56.457846 env[1756]: time="2025-09-13T00:49:56.457787731Z" level=info msg="StartContainer for \"5ec130c13d88ddc54b8855a70f673e0fa35cd4539315a87e5613a982893e840a\" returns successfully" Sep 13 00:49:56.515913 kubelet[2691]: I0913 00:49:56.515771 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pwbt\" (UniqueName: \"kubernetes.io/projected/18ca4f73-d8f8-4833-8a33-49c11730cffb-kube-api-access-6pwbt\") pod \"tigera-operator-58fc44c59b-pk74d\" (UID: \"18ca4f73-d8f8-4833-8a33-49c11730cffb\") " pod="tigera-operator/tigera-operator-58fc44c59b-pk74d" Sep 13 00:49:56.515913 kubelet[2691]: I0913 00:49:56.515860 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/18ca4f73-d8f8-4833-8a33-49c11730cffb-var-lib-calico\") pod \"tigera-operator-58fc44c59b-pk74d\" (UID: \"18ca4f73-d8f8-4833-8a33-49c11730cffb\") " pod="tigera-operator/tigera-operator-58fc44c59b-pk74d" Sep 13 00:49:56.644936 env[1756]: time="2025-09-13T00:49:56.644819135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-pk74d,Uid:18ca4f73-d8f8-4833-8a33-49c11730cffb,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:49:56.671561 env[1756]: time="2025-09-13T00:49:56.671417571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:49:56.671561 env[1756]: time="2025-09-13T00:49:56.671452283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:49:56.671561 env[1756]: time="2025-09-13T00:49:56.671463509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:49:56.672060 env[1756]: time="2025-09-13T00:49:56.671887862Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bee334829a5b06afb49a4d57098d2ac8f694024420c2eb5c14a13c08fadd4723 pid=2822 runtime=io.containerd.runc.v2 Sep 13 00:49:56.741266 env[1756]: time="2025-09-13T00:49:56.741218011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-pk74d,Uid:18ca4f73-d8f8-4833-8a33-49c11730cffb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bee334829a5b06afb49a4d57098d2ac8f694024420c2eb5c14a13c08fadd4723\"" Sep 13 00:49:56.743372 env[1756]: time="2025-09-13T00:49:56.743339996Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:49:57.034332 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:49:57.034571 kernel: audit: type=1325 audit(1757724597.027:231): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.027000 audit[2889]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.029000 audit[2890]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.046948 kernel: audit: type=1325 audit(1757724597.029:232): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.061113 kernel: audit: type=1300 audit(1757724597.029:232): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9107ac40 a2=0 a3=7ffe9107ac2c items=0 ppid=2797 pid=2890 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.029000 audit[2890]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9107ac40 a2=0 a3=7ffe9107ac2c items=0 ppid=2797 pid=2890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.029000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:49:57.066976 kernel: audit: type=1327 audit(1757724597.029:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:49:57.027000 audit[2889]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6a0271b0 a2=0 a3=7ffe6a02719c items=0 ppid=2797 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.082604 kernel: audit: type=1300 audit(1757724597.027:231): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6a0271b0 a2=0 a3=7ffe6a02719c items=0 ppid=2797 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.082736 kernel: audit: type=1327 audit(1757724597.027:231): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:49:57.027000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 
13 00:49:57.055000 audit[2892]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.055000 audit[2892]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef755a7f0 a2=0 a3=7ffef755a7dc items=0 ppid=2797 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.088232 kernel: audit: type=1325 audit(1757724597.055:233): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.088324 kernel: audit: type=1300 audit(1757724597.055:233): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef755a7f0 a2=0 a3=7ffef755a7dc items=0 ppid=2797 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.055000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:49:57.095936 kernel: audit: type=1327 audit(1757724597.055:233): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:49:57.060000 audit[2893]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=2893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.100993 kernel: audit: type=1325 audit(1757724597.060:234): table=nat:41 family=10 entries=1 op=nft_register_chain pid=2893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.060000 audit[2893]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe879fd200 a2=0 a3=7ffe879fd1ec items=0 ppid=2797 pid=2893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.060000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:49:57.066000 audit[2894]: NETFILTER_CFG table=filter:42 family=10 entries=1 op=nft_register_chain pid=2894 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.066000 audit[2894]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9183e0a0 a2=0 a3=7ffc9183e08c items=0 ppid=2797 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:49:57.076000 audit[2895]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.076000 audit[2895]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd441e0250 a2=0 a3=7ffd441e023c items=0 ppid=2797 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:49:57.161000 audit[2896]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.161000 audit[2896]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe883ccef0 a2=0 a3=7ffe883ccedc items=0 ppid=2797 pid=2896 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:49:57.166000 audit[2898]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.166000 audit[2898]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe4fa85e90 a2=0 a3=7ffe4fa85e7c items=0 ppid=2797 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.166000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 13 00:49:57.171000 audit[2901]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.171000 audit[2901]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe4b061160 a2=0 a3=7ffe4b06114c items=0 ppid=2797 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.171000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 
Sep 13 00:49:57.173000 audit[2902]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.173000 audit[2902]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc5eea190 a2=0 a3=7ffdc5eea17c items=0 ppid=2797 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:49:57.176000 audit[2904]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.176000 audit[2904]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdb6c48b60 a2=0 a3=7ffdb6c48b4c items=0 ppid=2797 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:49:57.178000 audit[2905]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.178000 audit[2905]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3a698bf0 a2=0 a3=7fff3a698bdc items=0 ppid=2797 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:49:57.178000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:49:57.181000 audit[2907]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.181000 audit[2907]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc1bcb1e90 a2=0 a3=7ffc1bcb1e7c items=0 ppid=2797 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.181000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:49:57.185000 audit[2910]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.185000 audit[2910]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff3b10c9c0 a2=0 a3=7fff3b10c9ac items=0 ppid=2797 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 13 00:49:57.187000 audit[2911]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 
00:49:57.187000 audit[2911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd84f8c440 a2=0 a3=7ffd84f8c42c items=0 ppid=2797 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:49:57.190000 audit[2913]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.190000 audit[2913]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde5bb4830 a2=0 a3=7ffde5bb481c items=0 ppid=2797 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.190000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:49:57.191000 audit[2914]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.191000 audit[2914]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8cf66660 a2=0 a3=7ffc8cf6664c items=0 ppid=2797 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:49:57.194000 
audit[2916]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.194000 audit[2916]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9f720340 a2=0 a3=7fff9f72032c items=0 ppid=2797 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:49:57.198000 audit[2919]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.198000 audit[2919]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0cdb5fd0 a2=0 a3=7fff0cdb5fbc items=0 ppid=2797 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:49:57.203000 audit[2922]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.203000 audit[2922]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb4101f60 a2=0 a3=7ffcb4101f4c items=0 ppid=2797 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:49:57.204000 audit[2923]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.204000 audit[2923]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb5475020 a2=0 a3=7ffdb547500c items=0 ppid=2797 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.204000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:49:57.209000 audit[2925]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.209000 audit[2925]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffde7ac8b20 a2=0 a3=7ffde7ac8b0c items=0 ppid=2797 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:49:57.214000 audit[2928]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule 
pid=2928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.214000 audit[2928]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc70e40b0 a2=0 a3=7ffdc70e409c items=0 ppid=2797 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.214000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:49:57.215000 audit[2929]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.215000 audit[2929]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5993d590 a2=0 a3=7ffd5993d57c items=0 ppid=2797 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.215000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:49:57.218000 audit[2931]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:49:57.218000 audit[2931]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcd9259200 a2=0 a3=7ffcd92591ec items=0 ppid=2797 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.218000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:49:57.262000 audit[2937]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:49:57.262000 audit[2937]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffefda59630 a2=0 a3=7ffefda5961c items=0 ppid=2797 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:49:57.271000 audit[2937]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:49:57.271000 audit[2937]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffefda59630 a2=0 a3=7ffefda5961c items=0 ppid=2797 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.271000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:49:57.273000 audit[2942]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.273000 audit[2942]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe8d736030 a2=0 a3=7ffe8d73601c items=0 ppid=2797 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:49:57.277000 audit[2944]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.277000 audit[2944]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcd1238d80 a2=0 a3=7ffcd1238d6c items=0 ppid=2797 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 13 00:49:57.283000 audit[2947]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.283000 audit[2947]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffce0f796a0 a2=0 a3=7ffce0f7968c items=0 ppid=2797 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 13 00:49:57.285000 
audit[2948]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.285000 audit[2948]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0cab53a0 a2=0 a3=7ffe0cab538c items=0 ppid=2797 pid=2948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.285000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:49:57.290000 audit[2950]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.290000 audit[2950]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdf462b280 a2=0 a3=7ffdf462b26c items=0 ppid=2797 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.290000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:49:57.291000 audit[2951]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.291000 audit[2951]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9b76d6e0 a2=0 a3=7ffc9b76d6cc items=0 ppid=2797 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 
00:49:57.291000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:49:57.295000 audit[2953]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.295000 audit[2953]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb79e5110 a2=0 a3=7fffb79e50fc items=0 ppid=2797 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.295000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 13 00:49:57.300000 audit[2956]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.300000 audit[2956]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc01f05570 a2=0 a3=7ffc01f0555c items=0 ppid=2797 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.300000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:49:57.301000 audit[2957]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.301000 
audit[2957]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedfb52100 a2=0 a3=7ffedfb520ec items=0 ppid=2797 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.301000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:49:57.304000 audit[2959]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.304000 audit[2959]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd389bf7f0 a2=0 a3=7ffd389bf7dc items=0 ppid=2797 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.304000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:49:57.306000 audit[2960]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.306000 audit[2960]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0b5c9660 a2=0 a3=7ffd0b5c964c items=0 ppid=2797 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.306000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:49:57.309000 audit[2962]: 
NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.309000 audit[2962]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff69567180 a2=0 a3=7fff6956716c items=0 ppid=2797 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.309000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:49:57.314000 audit[2965]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.314000 audit[2965]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe22702620 a2=0 a3=7ffe2270260c items=0 ppid=2797 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:49:57.318000 audit[2968]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.318000 audit[2968]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc12217db0 a2=0 a3=7ffc12217d9c items=0 ppid=2797 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 13 00:49:57.320000 audit[2969]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.320000 audit[2969]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe6048a900 a2=0 a3=7ffe6048a8ec items=0 ppid=2797 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:49:57.323000 audit[2971]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.323000 audit[2971]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffdb7a47e90 a2=0 a3=7ffdb7a47e7c items=0 ppid=2797 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.323000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:49:57.327000 audit[2974]: NETFILTER_CFG table=nat:81 family=10 entries=2 
op=nft_register_chain pid=2974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.327000 audit[2974]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffcf5ef4d80 a2=0 a3=7ffcf5ef4d6c items=0 ppid=2797 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.327000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:49:57.328000 audit[2975]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.328000 audit[2975]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc543e5940 a2=0 a3=7ffc543e592c items=0 ppid=2797 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:49:57.331000 audit[2977]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.331000 audit[2977]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffee9f8ea0 a2=0 a3=7fffee9f8e8c items=0 ppid=2797 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.331000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:49:57.333000 audit[2978]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.333000 audit[2978]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd84cf6750 a2=0 a3=7ffd84cf673c items=0 ppid=2797 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.333000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:49:57.336000 audit[2980]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.336000 audit[2980]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc7c65ebb0 a2=0 a3=7ffc7c65eb9c items=0 ppid=2797 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:49:57.341000 audit[2983]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:49:57.341000 audit[2983]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe96bc1080 a2=0 a3=7ffe96bc106c items=0 ppid=2797 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:49:57.344000 audit[2985]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:49:57.344000 audit[2985]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc7f89ece0 a2=0 a3=7ffc7f89eccc items=0 ppid=2797 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.344000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:49:57.345000 audit[2985]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:49:57.345000 audit[2985]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc7f89ece0 a2=0 a3=7ffc7f89eccc items=0 ppid=2797 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:57.345000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:49:58.048746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298079122.mount: Deactivated successfully. 
Sep 13 00:49:58.694000 kubelet[2691]: I0913 00:49:58.693924 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6kmzd" podStartSLOduration=3.693902821 podStartE2EDuration="3.693902821s" podCreationTimestamp="2025-09-13 00:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:49:57.027096457 +0000 UTC m=+5.334725402" watchObservedRunningTime="2025-09-13 00:49:58.693902821 +0000 UTC m=+7.001531773" Sep 13 00:49:59.081351 env[1756]: time="2025-09-13T00:49:59.081207046Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:59.083853 env[1756]: time="2025-09-13T00:49:59.083809991Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:59.085581 env[1756]: time="2025-09-13T00:49:59.085536805Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:59.087354 env[1756]: time="2025-09-13T00:49:59.087293035Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:59.088288 env[1756]: time="2025-09-13T00:49:59.088242713Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:49:59.093331 env[1756]: time="2025-09-13T00:49:59.093286168Z" level=info msg="CreateContainer within sandbox 
\"bee334829a5b06afb49a4d57098d2ac8f694024420c2eb5c14a13c08fadd4723\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:49:59.108609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487762569.mount: Deactivated successfully. Sep 13 00:49:59.115718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount412387970.mount: Deactivated successfully. Sep 13 00:49:59.122071 env[1756]: time="2025-09-13T00:49:59.121946093Z" level=info msg="CreateContainer within sandbox \"bee334829a5b06afb49a4d57098d2ac8f694024420c2eb5c14a13c08fadd4723\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"afa86985a6f7c72d4dceafb97a4056dc0c0cd303dc942b73dea1d11ce1acf65d\"" Sep 13 00:49:59.124472 env[1756]: time="2025-09-13T00:49:59.124303841Z" level=info msg="StartContainer for \"afa86985a6f7c72d4dceafb97a4056dc0c0cd303dc942b73dea1d11ce1acf65d\"" Sep 13 00:49:59.167062 update_engine[1742]: I0913 00:49:59.167023 1742 update_attempter.cc:509] Updating boot flags... 
Sep 13 00:49:59.198901 env[1756]: time="2025-09-13T00:49:59.196957115Z" level=info msg="StartContainer for \"afa86985a6f7c72d4dceafb97a4056dc0c0cd303dc942b73dea1d11ce1acf65d\" returns successfully" Sep 13 00:50:00.146628 kubelet[2691]: I0913 00:50:00.146234 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-pk74d" podStartSLOduration=1.799136776 podStartE2EDuration="4.146214248s" podCreationTimestamp="2025-09-13 00:49:56 +0000 UTC" firstStartedPulling="2025-09-13 00:49:56.74262641 +0000 UTC m=+5.050255340" lastFinishedPulling="2025-09-13 00:49:59.089703869 +0000 UTC m=+7.397332812" observedRunningTime="2025-09-13 00:50:00.145511331 +0000 UTC m=+8.453140286" watchObservedRunningTime="2025-09-13 00:50:00.146214248 +0000 UTC m=+8.453843200" Sep 13 00:50:09.184313 amazon-ssm-agent[1807]: 2025-09-13 00:50:09 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Sep 13 00:50:09.534200 sudo[2052]: pam_unix(sudo:session): session closed for user root Sep 13 00:50:09.546252 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 13 00:50:09.546421 kernel: audit: type=1106 audit(1757724609.533:282): pid=2052 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:50:09.533000 audit[2052]: USER_END pid=2052 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:50:09.533000 audit[2052]: CRED_DISP pid=2052 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 13 00:50:09.563908 kernel: audit: type=1104 audit(1757724609.533:283): pid=2052 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:50:09.577484 sshd[2048]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:09.594198 kernel: audit: type=1106 audit(1757724609.582:284): pid=2048 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:09.582000 audit[2048]: USER_END pid=2048 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:09.596624 systemd[1]: sshd@6-172.31.30.243:22-147.75.109.163:34830.service: Deactivated successfully. Sep 13 00:50:09.597766 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:50:09.600198 systemd-logind[1741]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:50:09.602136 systemd-logind[1741]: Removed session 7. 
Sep 13 00:50:09.593000 audit[2048]: CRED_DISP pid=2048 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:09.617900 kernel: audit: type=1104 audit(1757724609.593:285): pid=2048 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:09.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.243:22-147.75.109.163:34830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:09.639902 kernel: audit: type=1131 audit(1757724609.596:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.243:22-147.75.109.163:34830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:50:11.310000 audit[3251]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:11.328092 kernel: audit: type=1325 audit(1757724611.310:287): table=filter:89 family=2 entries=14 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:11.328236 kernel: audit: type=1300 audit(1757724611.310:287): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcc5c09cb0 a2=0 a3=7ffcc5c09c9c items=0 ppid=2797 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:11.310000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcc5c09cb0 a2=0 a3=7ffcc5c09c9c items=0 ppid=2797 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:11.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:11.350893 kernel: audit: type=1327 audit(1757724611.310:287): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:11.344000 audit[3251]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:11.377720 kernel: audit: type=1325 audit(1757724611.344:288): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:11.377865 kernel: audit: type=1300 audit(1757724611.344:288): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc5c09cb0 a2=0 
a3=0 items=0 ppid=2797 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:11.344000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc5c09cb0 a2=0 a3=0 items=0 ppid=2797 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:11.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:11.385000 audit[3253]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:11.385000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffde19fcc00 a2=0 a3=7ffde19fcbec items=0 ppid=2797 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:11.385000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:11.390000 audit[3253]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:11.390000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde19fcc00 a2=0 a3=0 items=0 ppid=2797 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:11.390000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:13.836000 audit[3256]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:13.836000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe12ff6820 a2=0 a3=7ffe12ff680c items=0 ppid=2797 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:13.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:13.842000 audit[3256]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:13.842000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe12ff6820 a2=0 a3=0 items=0 ppid=2797 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:13.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:13.865000 audit[3258]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=3258 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:13.865000 audit[3258]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffbc1340d0 a2=0 a3=7fffbc1340bc items=0 ppid=2797 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:13.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:13.885000 audit[3258]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3258 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:13.885000 audit[3258]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffbc1340d0 a2=0 a3=0 items=0 ppid=2797 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:13.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:14.169403 kubelet[2691]: I0913 00:50:14.169245 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/414d165c-4c1a-4608-914a-1833987ba4fc-tigera-ca-bundle\") pod \"calico-typha-5f986965f4-786pc\" (UID: \"414d165c-4c1a-4608-914a-1833987ba4fc\") " pod="calico-system/calico-typha-5f986965f4-786pc" Sep 13 00:50:14.169403 kubelet[2691]: I0913 00:50:14.169295 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/414d165c-4c1a-4608-914a-1833987ba4fc-typha-certs\") pod \"calico-typha-5f986965f4-786pc\" (UID: \"414d165c-4c1a-4608-914a-1833987ba4fc\") " pod="calico-system/calico-typha-5f986965f4-786pc" Sep 13 00:50:14.169403 kubelet[2691]: I0913 00:50:14.169317 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slnbj\" (UniqueName: \"kubernetes.io/projected/414d165c-4c1a-4608-914a-1833987ba4fc-kube-api-access-slnbj\") pod 
\"calico-typha-5f986965f4-786pc\" (UID: \"414d165c-4c1a-4608-914a-1833987ba4fc\") " pod="calico-system/calico-typha-5f986965f4-786pc" Sep 13 00:50:14.472564 kubelet[2691]: I0913 00:50:14.472449 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e263a73e-3d91-4700-9553-fd439fae9417-policysync\") pod \"calico-node-ddf5z\" (UID: \"e263a73e-3d91-4700-9553-fd439fae9417\") " pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.472564 kubelet[2691]: I0913 00:50:14.472516 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e263a73e-3d91-4700-9553-fd439fae9417-var-run-calico\") pod \"calico-node-ddf5z\" (UID: \"e263a73e-3d91-4700-9553-fd439fae9417\") " pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.472564 kubelet[2691]: I0913 00:50:14.472544 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e263a73e-3d91-4700-9553-fd439fae9417-tigera-ca-bundle\") pod \"calico-node-ddf5z\" (UID: \"e263a73e-3d91-4700-9553-fd439fae9417\") " pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.472835 kubelet[2691]: I0913 00:50:14.472589 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e263a73e-3d91-4700-9553-fd439fae9417-flexvol-driver-host\") pod \"calico-node-ddf5z\" (UID: \"e263a73e-3d91-4700-9553-fd439fae9417\") " pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.472835 kubelet[2691]: I0913 00:50:14.472622 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e263a73e-3d91-4700-9553-fd439fae9417-cni-bin-dir\") pod \"calico-node-ddf5z\" (UID: 
\"e263a73e-3d91-4700-9553-fd439fae9417\") " pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.472835 kubelet[2691]: I0913 00:50:14.472670 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e263a73e-3d91-4700-9553-fd439fae9417-var-lib-calico\") pod \"calico-node-ddf5z\" (UID: \"e263a73e-3d91-4700-9553-fd439fae9417\") " pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.472835 kubelet[2691]: I0913 00:50:14.472692 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e263a73e-3d91-4700-9553-fd439fae9417-xtables-lock\") pod \"calico-node-ddf5z\" (UID: \"e263a73e-3d91-4700-9553-fd439fae9417\") " pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.472835 kubelet[2691]: I0913 00:50:14.472715 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mld24\" (UniqueName: \"kubernetes.io/projected/e263a73e-3d91-4700-9553-fd439fae9417-kube-api-access-mld24\") pod \"calico-node-ddf5z\" (UID: \"e263a73e-3d91-4700-9553-fd439fae9417\") " pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.473072 kubelet[2691]: I0913 00:50:14.472759 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e263a73e-3d91-4700-9553-fd439fae9417-cni-net-dir\") pod \"calico-node-ddf5z\" (UID: \"e263a73e-3d91-4700-9553-fd439fae9417\") " pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.473072 kubelet[2691]: I0913 00:50:14.472785 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e263a73e-3d91-4700-9553-fd439fae9417-cni-log-dir\") pod \"calico-node-ddf5z\" (UID: \"e263a73e-3d91-4700-9553-fd439fae9417\") " 
pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.473072 kubelet[2691]: I0913 00:50:14.472827 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e263a73e-3d91-4700-9553-fd439fae9417-lib-modules\") pod \"calico-node-ddf5z\" (UID: \"e263a73e-3d91-4700-9553-fd439fae9417\") " pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.473072 kubelet[2691]: I0913 00:50:14.472851 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e263a73e-3d91-4700-9553-fd439fae9417-node-certs\") pod \"calico-node-ddf5z\" (UID: \"e263a73e-3d91-4700-9553-fd439fae9417\") " pod="calico-system/calico-node-ddf5z" Sep 13 00:50:14.576741 kubelet[2691]: E0913 00:50:14.576693 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.577006 kubelet[2691]: W0913 00:50:14.576979 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.577174 kubelet[2691]: E0913 00:50:14.577158 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.577669 kubelet[2691]: E0913 00:50:14.577643 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.577788 kubelet[2691]: W0913 00:50:14.577773 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.577907 kubelet[2691]: E0913 00:50:14.577894 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.578546 kubelet[2691]: E0913 00:50:14.578531 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.578666 kubelet[2691]: W0913 00:50:14.578652 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.578775 kubelet[2691]: E0913 00:50:14.578762 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.579141 kubelet[2691]: E0913 00:50:14.579128 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.579272 kubelet[2691]: W0913 00:50:14.579258 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.579374 kubelet[2691]: E0913 00:50:14.579359 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.590435 kubelet[2691]: E0913 00:50:14.590177 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.590435 kubelet[2691]: W0913 00:50:14.590218 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.590435 kubelet[2691]: E0913 00:50:14.590245 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.591337 env[1756]: time="2025-09-13T00:50:14.591265170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f986965f4-786pc,Uid:414d165c-4c1a-4608-914a-1833987ba4fc,Namespace:calico-system,Attempt:0,}" Sep 13 00:50:14.592309 kubelet[2691]: E0913 00:50:14.592282 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.592309 kubelet[2691]: W0913 00:50:14.592308 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.592456 kubelet[2691]: E0913 00:50:14.592329 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.610194 kubelet[2691]: E0913 00:50:14.610163 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.610390 kubelet[2691]: W0913 00:50:14.610192 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.610390 kubelet[2691]: E0913 00:50:14.610243 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.635145 env[1756]: time="2025-09-13T00:50:14.634180060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:14.635145 env[1756]: time="2025-09-13T00:50:14.634220610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:14.635145 env[1756]: time="2025-09-13T00:50:14.634244676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:14.635145 env[1756]: time="2025-09-13T00:50:14.634415695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e01aaf28887241b86d4e83c9bcee07cf63869db386137fc4b6bb8b31796145b pid=3277 runtime=io.containerd.runc.v2 Sep 13 00:50:14.658192 kubelet[2691]: E0913 00:50:14.658129 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbdlb" podUID="543e9814-38b9-4890-8c16-f362d4a3151e" Sep 13 00:50:14.660577 env[1756]: time="2025-09-13T00:50:14.660538990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ddf5z,Uid:e263a73e-3d91-4700-9553-fd439fae9417,Namespace:calico-system,Attempt:0,}" Sep 13 00:50:14.679894 kubelet[2691]: E0913 00:50:14.679843 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.679894 kubelet[2691]: W0913 00:50:14.679867 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.680060 kubelet[2691]: E0913 00:50:14.679914 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.680128 kubelet[2691]: E0913 00:50:14.680114 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.680128 kubelet[2691]: W0913 00:50:14.680128 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.680211 kubelet[2691]: E0913 00:50:14.680136 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.680283 kubelet[2691]: E0913 00:50:14.680272 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.680283 kubelet[2691]: W0913 00:50:14.680283 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.680352 kubelet[2691]: E0913 00:50:14.680290 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.680434 kubelet[2691]: E0913 00:50:14.680423 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.680434 kubelet[2691]: W0913 00:50:14.680433 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.680509 kubelet[2691]: E0913 00:50:14.680441 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.680592 kubelet[2691]: E0913 00:50:14.680581 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.680592 kubelet[2691]: W0913 00:50:14.680592 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.680659 kubelet[2691]: E0913 00:50:14.680599 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.680736 kubelet[2691]: E0913 00:50:14.680725 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.680736 kubelet[2691]: W0913 00:50:14.680735 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.680806 kubelet[2691]: E0913 00:50:14.680742 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.680886 kubelet[2691]: E0913 00:50:14.680866 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.680930 kubelet[2691]: W0913 00:50:14.680888 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.680930 kubelet[2691]: E0913 00:50:14.680895 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.681041 kubelet[2691]: E0913 00:50:14.681030 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.681041 kubelet[2691]: W0913 00:50:14.681040 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.681109 kubelet[2691]: E0913 00:50:14.681048 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.681193 kubelet[2691]: E0913 00:50:14.681182 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.681193 kubelet[2691]: W0913 00:50:14.681193 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.681263 kubelet[2691]: E0913 00:50:14.681200 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.681402 kubelet[2691]: E0913 00:50:14.681388 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.681402 kubelet[2691]: W0913 00:50:14.681400 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.681483 kubelet[2691]: E0913 00:50:14.681409 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.681553 kubelet[2691]: E0913 00:50:14.681541 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.681553 kubelet[2691]: W0913 00:50:14.681553 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.681623 kubelet[2691]: E0913 00:50:14.681559 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.681733 kubelet[2691]: E0913 00:50:14.681688 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.681733 kubelet[2691]: W0913 00:50:14.681696 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.681733 kubelet[2691]: E0913 00:50:14.681702 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.681903 kubelet[2691]: E0913 00:50:14.681834 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.681903 kubelet[2691]: W0913 00:50:14.681842 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.681903 kubelet[2691]: E0913 00:50:14.681848 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.682109 kubelet[2691]: E0913 00:50:14.682095 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.682109 kubelet[2691]: W0913 00:50:14.682105 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.682173 kubelet[2691]: E0913 00:50:14.682115 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.682287 kubelet[2691]: E0913 00:50:14.682247 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.682287 kubelet[2691]: W0913 00:50:14.682255 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.682287 kubelet[2691]: E0913 00:50:14.682263 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.682466 kubelet[2691]: E0913 00:50:14.682392 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.682466 kubelet[2691]: W0913 00:50:14.682401 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.682466 kubelet[2691]: E0913 00:50:14.682407 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.682566 kubelet[2691]: E0913 00:50:14.682547 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.682566 kubelet[2691]: W0913 00:50:14.682552 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.682566 kubelet[2691]: E0913 00:50:14.682559 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.684781 kubelet[2691]: E0913 00:50:14.682685 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.684781 kubelet[2691]: W0913 00:50:14.682693 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.684781 kubelet[2691]: E0913 00:50:14.682699 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.684781 kubelet[2691]: E0913 00:50:14.682822 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.684781 kubelet[2691]: W0913 00:50:14.682827 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.684781 kubelet[2691]: E0913 00:50:14.682833 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.684781 kubelet[2691]: E0913 00:50:14.683003 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.684781 kubelet[2691]: W0913 00:50:14.683009 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.684781 kubelet[2691]: E0913 00:50:14.683016 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.701309 env[1756]: time="2025-09-13T00:50:14.701219535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:14.701560 env[1756]: time="2025-09-13T00:50:14.701520229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:14.701688 env[1756]: time="2025-09-13T00:50:14.701661109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:14.702091 env[1756]: time="2025-09-13T00:50:14.702054195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea000c5bd3bd64cb286e3b8c7bd3ba9b6bcdcdc03aacafa27142b36c02f734b5 pid=3340 runtime=io.containerd.runc.v2 Sep 13 00:50:14.772950 env[1756]: time="2025-09-13T00:50:14.769515817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f986965f4-786pc,Uid:414d165c-4c1a-4608-914a-1833987ba4fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e01aaf28887241b86d4e83c9bcee07cf63869db386137fc4b6bb8b31796145b\"" Sep 13 00:50:14.775907 kubelet[2691]: E0913 00:50:14.774944 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.775907 kubelet[2691]: W0913 00:50:14.774966 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.775907 kubelet[2691]: E0913 00:50:14.774989 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.775907 kubelet[2691]: I0913 00:50:14.775066 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/543e9814-38b9-4890-8c16-f362d4a3151e-registration-dir\") pod \"csi-node-driver-dbdlb\" (UID: \"543e9814-38b9-4890-8c16-f362d4a3151e\") " pod="calico-system/csi-node-driver-dbdlb" Sep 13 00:50:14.775907 kubelet[2691]: E0913 00:50:14.775805 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.775907 kubelet[2691]: W0913 00:50:14.775843 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.775907 kubelet[2691]: E0913 00:50:14.775899 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.776362 kubelet[2691]: I0913 00:50:14.775927 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbshd\" (UniqueName: \"kubernetes.io/projected/543e9814-38b9-4890-8c16-f362d4a3151e-kube-api-access-vbshd\") pod \"csi-node-driver-dbdlb\" (UID: \"543e9814-38b9-4890-8c16-f362d4a3151e\") " pod="calico-system/csi-node-driver-dbdlb" Sep 13 00:50:14.776362 kubelet[2691]: E0913 00:50:14.776218 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.776362 kubelet[2691]: W0913 00:50:14.776230 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.776362 kubelet[2691]: E0913 00:50:14.776249 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.776543 kubelet[2691]: E0913 00:50:14.776499 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.776543 kubelet[2691]: W0913 00:50:14.776511 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.776543 kubelet[2691]: E0913 00:50:14.776529 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.777188 kubelet[2691]: E0913 00:50:14.777147 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.777188 kubelet[2691]: W0913 00:50:14.777164 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.777188 kubelet[2691]: E0913 00:50:14.777183 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.777398 kubelet[2691]: I0913 00:50:14.777208 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/543e9814-38b9-4890-8c16-f362d4a3151e-varrun\") pod \"csi-node-driver-dbdlb\" (UID: \"543e9814-38b9-4890-8c16-f362d4a3151e\") " pod="calico-system/csi-node-driver-dbdlb" Sep 13 00:50:14.778445 kubelet[2691]: E0913 00:50:14.777467 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.778445 kubelet[2691]: W0913 00:50:14.777482 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.778445 kubelet[2691]: E0913 00:50:14.777533 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.778445 kubelet[2691]: I0913 00:50:14.777562 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/543e9814-38b9-4890-8c16-f362d4a3151e-socket-dir\") pod \"csi-node-driver-dbdlb\" (UID: \"543e9814-38b9-4890-8c16-f362d4a3151e\") " pod="calico-system/csi-node-driver-dbdlb" Sep 13 00:50:14.778445 kubelet[2691]: E0913 00:50:14.777852 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.778445 kubelet[2691]: W0913 00:50:14.777862 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.778445 kubelet[2691]: E0913 00:50:14.777978 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.778445 kubelet[2691]: E0913 00:50:14.778208 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.778445 kubelet[2691]: W0913 00:50:14.778222 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.779481 kubelet[2691]: E0913 00:50:14.778239 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.779481 kubelet[2691]: E0913 00:50:14.778496 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.779481 kubelet[2691]: W0913 00:50:14.778507 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.779481 kubelet[2691]: E0913 00:50:14.778525 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.779481 kubelet[2691]: I0913 00:50:14.778549 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/543e9814-38b9-4890-8c16-f362d4a3151e-kubelet-dir\") pod \"csi-node-driver-dbdlb\" (UID: \"543e9814-38b9-4890-8c16-f362d4a3151e\") " pod="calico-system/csi-node-driver-dbdlb" Sep 13 00:50:14.779481 kubelet[2691]: E0913 00:50:14.778993 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.779481 kubelet[2691]: W0913 00:50:14.779007 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.779481 kubelet[2691]: E0913 00:50:14.779051 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.779850 kubelet[2691]: E0913 00:50:14.779779 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.779850 kubelet[2691]: W0913 00:50:14.779792 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.779850 kubelet[2691]: E0913 00:50:14.779806 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.780162 kubelet[2691]: E0913 00:50:14.780127 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.780162 kubelet[2691]: W0913 00:50:14.780143 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.780162 kubelet[2691]: E0913 00:50:14.780161 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.780374 kubelet[2691]: E0913 00:50:14.780359 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.780443 kubelet[2691]: W0913 00:50:14.780374 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.780443 kubelet[2691]: E0913 00:50:14.780387 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.780718 kubelet[2691]: E0913 00:50:14.780678 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.780718 kubelet[2691]: W0913 00:50:14.780694 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.780718 kubelet[2691]: E0913 00:50:14.780707 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.781107 kubelet[2691]: E0913 00:50:14.781093 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.781188 kubelet[2691]: W0913 00:50:14.781106 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.781188 kubelet[2691]: E0913 00:50:14.781121 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.784660 env[1756]: time="2025-09-13T00:50:14.783560889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:50:14.808337 env[1756]: time="2025-09-13T00:50:14.808290866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ddf5z,Uid:e263a73e-3d91-4700-9553-fd439fae9417,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea000c5bd3bd64cb286e3b8c7bd3ba9b6bcdcdc03aacafa27142b36c02f734b5\"" Sep 13 00:50:14.881381 kubelet[2691]: E0913 00:50:14.881342 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.881381 kubelet[2691]: W0913 00:50:14.881370 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.881610 kubelet[2691]: E0913 00:50:14.881395 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.881852 kubelet[2691]: E0913 00:50:14.881834 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.882091 kubelet[2691]: W0913 00:50:14.881852 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.882091 kubelet[2691]: E0913 00:50:14.881885 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.882404 kubelet[2691]: E0913 00:50:14.882389 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.882489 kubelet[2691]: W0913 00:50:14.882405 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.882489 kubelet[2691]: E0913 00:50:14.882427 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.882765 kubelet[2691]: E0913 00:50:14.882750 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.882854 kubelet[2691]: W0913 00:50:14.882767 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.882854 kubelet[2691]: E0913 00:50:14.882784 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.883144 kubelet[2691]: E0913 00:50:14.883119 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.883236 kubelet[2691]: W0913 00:50:14.883135 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.883285 kubelet[2691]: E0913 00:50:14.883243 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.883494 kubelet[2691]: E0913 00:50:14.883481 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.883574 kubelet[2691]: W0913 00:50:14.883495 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.883635 kubelet[2691]: E0913 00:50:14.883590 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.883849 kubelet[2691]: E0913 00:50:14.883823 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.883979 kubelet[2691]: W0913 00:50:14.883848 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.883979 kubelet[2691]: E0913 00:50:14.883953 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.885676 kubelet[2691]: E0913 00:50:14.884265 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.885676 kubelet[2691]: W0913 00:50:14.884278 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.885676 kubelet[2691]: E0913 00:50:14.884339 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.885676 kubelet[2691]: E0913 00:50:14.884560 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.885676 kubelet[2691]: W0913 00:50:14.884569 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.885676 kubelet[2691]: E0913 00:50:14.884650 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.885676 kubelet[2691]: E0913 00:50:14.884806 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.885676 kubelet[2691]: W0913 00:50:14.884816 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.885676 kubelet[2691]: E0913 00:50:14.884927 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.885676 kubelet[2691]: E0913 00:50:14.885084 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.886153 kubelet[2691]: W0913 00:50:14.885093 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.886153 kubelet[2691]: E0913 00:50:14.885180 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.886153 kubelet[2691]: E0913 00:50:14.885321 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.886153 kubelet[2691]: W0913 00:50:14.885338 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.886153 kubelet[2691]: E0913 00:50:14.885422 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.886153 kubelet[2691]: E0913 00:50:14.885557 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.886153 kubelet[2691]: W0913 00:50:14.885566 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.886153 kubelet[2691]: E0913 00:50:14.885580 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.888393 kubelet[2691]: E0913 00:50:14.886564 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.888393 kubelet[2691]: W0913 00:50:14.886577 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.888393 kubelet[2691]: E0913 00:50:14.886667 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.888393 kubelet[2691]: E0913 00:50:14.886832 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.888393 kubelet[2691]: W0913 00:50:14.886842 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.888393 kubelet[2691]: E0913 00:50:14.887221 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.888393 kubelet[2691]: E0913 00:50:14.887423 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.888393 kubelet[2691]: W0913 00:50:14.887433 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.888393 kubelet[2691]: E0913 00:50:14.887516 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.888393 kubelet[2691]: E0913 00:50:14.887646 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.888827 kubelet[2691]: W0913 00:50:14.887658 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.888827 kubelet[2691]: E0913 00:50:14.887738 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.888827 kubelet[2691]: E0913 00:50:14.887903 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.888827 kubelet[2691]: W0913 00:50:14.887913 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.888827 kubelet[2691]: E0913 00:50:14.888038 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.888827 kubelet[2691]: E0913 00:50:14.888194 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.888827 kubelet[2691]: W0913 00:50:14.888203 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.888827 kubelet[2691]: E0913 00:50:14.888219 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.889659 kubelet[2691]: E0913 00:50:14.888955 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.889659 kubelet[2691]: W0913 00:50:14.888966 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.889659 kubelet[2691]: E0913 00:50:14.888984 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.889659 kubelet[2691]: E0913 00:50:14.889269 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.889659 kubelet[2691]: W0913 00:50:14.889280 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.890469 kubelet[2691]: E0913 00:50:14.889916 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.890469 kubelet[2691]: E0913 00:50:14.890087 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.890469 kubelet[2691]: W0913 00:50:14.890097 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.890469 kubelet[2691]: E0913 00:50:14.890217 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.890469 kubelet[2691]: E0913 00:50:14.890355 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.890469 kubelet[2691]: W0913 00:50:14.890366 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.891064 kubelet[2691]: E0913 00:50:14.890761 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.891064 kubelet[2691]: E0913 00:50:14.890950 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.891064 kubelet[2691]: W0913 00:50:14.890961 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.891300 kubelet[2691]: E0913 00:50:14.891268 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.891493 kubelet[2691]: E0913 00:50:14.891483 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.891571 kubelet[2691]: W0913 00:50:14.891560 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.891929 kubelet[2691]: E0913 00:50:14.891629 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:14.904980 kubelet[2691]: E0913 00:50:14.904951 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:14.905189 kubelet[2691]: W0913 00:50:14.905168 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:14.905313 kubelet[2691]: E0913 00:50:14.905298 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:14.914000 audit[3423]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=3423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:14.917646 kernel: kauditd_printk_skb: 19 callbacks suppressed Sep 13 00:50:14.917749 kernel: audit: type=1325 audit(1757724614.914:295): table=filter:97 family=2 entries=20 op=nft_register_rule pid=3423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:14.933819 kernel: audit: type=1300 audit(1757724614.914:295): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd9197c820 a2=0 a3=7ffd9197c80c items=0 ppid=2797 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:14.914000 audit[3423]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd9197c820 a2=0 a3=7ffd9197c80c items=0 ppid=2797 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:14.914000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:14.939912 kernel: audit: type=1327 audit(1757724614.914:295): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:14.941000 audit[3423]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=3423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:14.941000 audit[3423]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9197c820 a2=0 a3=0 items=0 ppid=2797 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:14.957409 kernel: audit: type=1325 audit(1757724614.941:296): table=nat:98 family=2 entries=12 op=nft_register_rule pid=3423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:14.957549 kernel: audit: type=1300 audit(1757724614.941:296): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9197c820 a2=0 a3=0 items=0 ppid=2797 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:14.963229 kernel: audit: type=1327 audit(1757724614.941:296): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:14.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:15.904356 kubelet[2691]: E0913 00:50:15.903081 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbdlb" podUID="543e9814-38b9-4890-8c16-f362d4a3151e" Sep 13 00:50:16.085107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1074177338.mount: Deactivated successfully. Sep 13 00:50:17.498911 env[1756]: time="2025-09-13T00:50:17.498849559Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:17.513812 env[1756]: time="2025-09-13T00:50:17.513732910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:17.523600 env[1756]: time="2025-09-13T00:50:17.523556292Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:17.527114 env[1756]: time="2025-09-13T00:50:17.527072330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:50:17.528084 env[1756]: time="2025-09-13T00:50:17.526228055Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:17.542503 env[1756]: time="2025-09-13T00:50:17.532184614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:50:17.557904 env[1756]: time="2025-09-13T00:50:17.557464187Z" level=info msg="CreateContainer within sandbox \"3e01aaf28887241b86d4e83c9bcee07cf63869db386137fc4b6bb8b31796145b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 
00:50:17.581867 env[1756]: time="2025-09-13T00:50:17.581811322Z" level=info msg="CreateContainer within sandbox \"3e01aaf28887241b86d4e83c9bcee07cf63869db386137fc4b6bb8b31796145b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2ab3b724f30c78b2e9f60e6098138d07373877582aaf58787ae75bbc1132d8e1\"" Sep 13 00:50:17.582974 env[1756]: time="2025-09-13T00:50:17.582935171Z" level=info msg="StartContainer for \"2ab3b724f30c78b2e9f60e6098138d07373877582aaf58787ae75bbc1132d8e1\"" Sep 13 00:50:17.706125 env[1756]: time="2025-09-13T00:50:17.706070154Z" level=info msg="StartContainer for \"2ab3b724f30c78b2e9f60e6098138d07373877582aaf58787ae75bbc1132d8e1\" returns successfully" Sep 13 00:50:17.905281 kubelet[2691]: E0913 00:50:17.904340 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbdlb" podUID="543e9814-38b9-4890-8c16-f362d4a3151e" Sep 13 00:50:18.174548 kubelet[2691]: I0913 00:50:18.174473 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f986965f4-786pc" podStartSLOduration=2.426211794 podStartE2EDuration="5.174454313s" podCreationTimestamp="2025-09-13 00:50:13 +0000 UTC" firstStartedPulling="2025-09-13 00:50:14.782819923 +0000 UTC m=+23.090448856" lastFinishedPulling="2025-09-13 00:50:17.531062425 +0000 UTC m=+25.838691375" observedRunningTime="2025-09-13 00:50:18.155143074 +0000 UTC m=+26.462772025" watchObservedRunningTime="2025-09-13 00:50:18.174454313 +0000 UTC m=+26.482083263" Sep 13 00:50:18.200000 audit[3470]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:18.205894 kernel: audit: type=1325 audit(1757724618.200:297): table=filter:99 family=2 entries=21 
op=nft_register_rule pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:18.200000 audit[3470]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffda72a9c10 a2=0 a3=7ffda72a9bfc items=0 ppid=2797 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:18.219882 kernel: audit: type=1300 audit(1757724618.200:297): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffda72a9c10 a2=0 a3=7ffda72a9bfc items=0 ppid=2797 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:18.220423 kubelet[2691]: E0913 00:50:18.220399 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.220560 kubelet[2691]: W0913 00:50:18.220545 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.220641 kubelet[2691]: E0913 00:50:18.220630 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.220906 kubelet[2691]: E0913 00:50:18.220897 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.220991 kubelet[2691]: W0913 00:50:18.220982 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.221058 kubelet[2691]: E0913 00:50:18.221039 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.221285 kubelet[2691]: E0913 00:50:18.221276 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.221361 kubelet[2691]: W0913 00:50:18.221352 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.221423 kubelet[2691]: E0913 00:50:18.221408 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.221713 kubelet[2691]: E0913 00:50:18.221699 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.221804 kubelet[2691]: W0913 00:50:18.221794 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.221866 kubelet[2691]: E0913 00:50:18.221851 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.222130 kubelet[2691]: E0913 00:50:18.222122 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.222210 kubelet[2691]: W0913 00:50:18.222201 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.222273 kubelet[2691]: E0913 00:50:18.222258 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.222514 kubelet[2691]: E0913 00:50:18.222505 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.222598 kubelet[2691]: W0913 00:50:18.222589 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.222659 kubelet[2691]: E0913 00:50:18.222643 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.222891 kubelet[2691]: E0913 00:50:18.222883 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.223008 kubelet[2691]: W0913 00:50:18.222999 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.223070 kubelet[2691]: E0913 00:50:18.223055 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.223293 kubelet[2691]: E0913 00:50:18.223286 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.223372 kubelet[2691]: W0913 00:50:18.223363 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.223420 kubelet[2691]: E0913 00:50:18.223413 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.223656 kubelet[2691]: E0913 00:50:18.223648 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.223736 kubelet[2691]: W0913 00:50:18.223727 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.223786 kubelet[2691]: E0913 00:50:18.223778 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.224034 kubelet[2691]: E0913 00:50:18.224026 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.224113 kubelet[2691]: W0913 00:50:18.224104 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.224174 kubelet[2691]: E0913 00:50:18.224159 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.224390 kubelet[2691]: E0913 00:50:18.224383 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.224465 kubelet[2691]: W0913 00:50:18.224455 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.224518 kubelet[2691]: E0913 00:50:18.224510 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.224716 kubelet[2691]: E0913 00:50:18.224709 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.224781 kubelet[2691]: W0913 00:50:18.224773 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.224838 kubelet[2691]: E0913 00:50:18.224822 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.225054 kubelet[2691]: E0913 00:50:18.225047 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.225117 kubelet[2691]: W0913 00:50:18.225110 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.225166 kubelet[2691]: E0913 00:50:18.225159 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.225364 kubelet[2691]: E0913 00:50:18.225357 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.225429 kubelet[2691]: W0913 00:50:18.225422 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.225478 kubelet[2691]: E0913 00:50:18.225471 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.225661 kubelet[2691]: E0913 00:50:18.225655 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.225714 kubelet[2691]: W0913 00:50:18.225707 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.225775 kubelet[2691]: E0913 00:50:18.225767 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:18.231696 kernel: audit: type=1327 audit(1757724618.200:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:18.253000 audit[3470]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:18.258966 kernel: audit: type=1325 audit(1757724618.253:298): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:18.253000 audit[3470]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffda72a9c10 a2=0 a3=7ffda72a9bfc items=0 ppid=2797 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:18.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:18.320851 kubelet[2691]: E0913 00:50:18.320826 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.321094 kubelet[2691]: W0913 00:50:18.321078 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.321184 kubelet[2691]: E0913 00:50:18.321174 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.321535 kubelet[2691]: E0913 00:50:18.321522 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.321628 kubelet[2691]: W0913 00:50:18.321618 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.321701 kubelet[2691]: E0913 00:50:18.321692 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.322009 kubelet[2691]: E0913 00:50:18.321999 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.322101 kubelet[2691]: W0913 00:50:18.322091 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.322175 kubelet[2691]: E0913 00:50:18.322166 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.322413 kubelet[2691]: E0913 00:50:18.322404 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.322492 kubelet[2691]: W0913 00:50:18.322482 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.322557 kubelet[2691]: E0913 00:50:18.322549 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.322821 kubelet[2691]: E0913 00:50:18.322811 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.322913 kubelet[2691]: W0913 00:50:18.322903 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.322994 kubelet[2691]: E0913 00:50:18.322986 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.323232 kubelet[2691]: E0913 00:50:18.323223 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.323305 kubelet[2691]: W0913 00:50:18.323296 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.323368 kubelet[2691]: E0913 00:50:18.323353 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.323580 kubelet[2691]: E0913 00:50:18.323572 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.323656 kubelet[2691]: W0913 00:50:18.323647 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.323722 kubelet[2691]: E0913 00:50:18.323706 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.323948 kubelet[2691]: E0913 00:50:18.323940 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.324029 kubelet[2691]: W0913 00:50:18.324020 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.324095 kubelet[2691]: E0913 00:50:18.324087 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.324309 kubelet[2691]: E0913 00:50:18.324301 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.324385 kubelet[2691]: W0913 00:50:18.324375 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.324445 kubelet[2691]: E0913 00:50:18.324438 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.324654 kubelet[2691]: E0913 00:50:18.324647 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.330008 kubelet[2691]: W0913 00:50:18.329972 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.330148 kubelet[2691]: E0913 00:50:18.330139 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.330433 kubelet[2691]: E0913 00:50:18.330423 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.330523 kubelet[2691]: W0913 00:50:18.330513 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.330587 kubelet[2691]: E0913 00:50:18.330578 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.330829 kubelet[2691]: E0913 00:50:18.330821 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.330909 kubelet[2691]: W0913 00:50:18.330900 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.330979 kubelet[2691]: E0913 00:50:18.330962 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.331527 kubelet[2691]: E0913 00:50:18.331516 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.331602 kubelet[2691]: W0913 00:50:18.331593 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.331664 kubelet[2691]: E0913 00:50:18.331656 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.331915 kubelet[2691]: E0913 00:50:18.331906 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.331977 kubelet[2691]: W0913 00:50:18.331969 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.332035 kubelet[2691]: E0913 00:50:18.332027 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.332387 kubelet[2691]: E0913 00:50:18.332378 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.332470 kubelet[2691]: W0913 00:50:18.332461 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.332529 kubelet[2691]: E0913 00:50:18.332521 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.333541 kubelet[2691]: E0913 00:50:18.333530 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.333621 kubelet[2691]: W0913 00:50:18.333611 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.333676 kubelet[2691]: E0913 00:50:18.333669 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.335032 kubelet[2691]: E0913 00:50:18.335002 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.335128 kubelet[2691]: W0913 00:50:18.335118 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.335198 kubelet[2691]: E0913 00:50:18.335189 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:50:18.335426 kubelet[2691]: E0913 00:50:18.335418 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:50:18.335489 kubelet[2691]: W0913 00:50:18.335481 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:50:18.335538 kubelet[2691]: E0913 00:50:18.335531 2691 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:50:18.842222 env[1756]: time="2025-09-13T00:50:18.842176000Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:18.846151 env[1756]: time="2025-09-13T00:50:18.846118226Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:18.849496 env[1756]: time="2025-09-13T00:50:18.849082043Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:18.851593 env[1756]: time="2025-09-13T00:50:18.851549119Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:18.852269 env[1756]: time="2025-09-13T00:50:18.852231559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image 
reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:50:18.857408 env[1756]: time="2025-09-13T00:50:18.857249529Z" level=info msg="CreateContainer within sandbox \"ea000c5bd3bd64cb286e3b8c7bd3ba9b6bcdcdc03aacafa27142b36c02f734b5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:50:18.886676 env[1756]: time="2025-09-13T00:50:18.886608012Z" level=info msg="CreateContainer within sandbox \"ea000c5bd3bd64cb286e3b8c7bd3ba9b6bcdcdc03aacafa27142b36c02f734b5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a7c0c7ec3edb27c286f89d845509d536aefe3198c6ad46d0f4ae10e4cd1d1159\"" Sep 13 00:50:18.888680 env[1756]: time="2025-09-13T00:50:18.887397965Z" level=info msg="StartContainer for \"a7c0c7ec3edb27c286f89d845509d536aefe3198c6ad46d0f4ae10e4cd1d1159\"" Sep 13 00:50:18.987045 env[1756]: time="2025-09-13T00:50:18.986979478Z" level=info msg="StartContainer for \"a7c0c7ec3edb27c286f89d845509d536aefe3198c6ad46d0f4ae10e4cd1d1159\" returns successfully" Sep 13 00:50:19.043342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7c0c7ec3edb27c286f89d845509d536aefe3198c6ad46d0f4ae10e4cd1d1159-rootfs.mount: Deactivated successfully. 
Sep 13 00:50:19.067670 env[1756]: time="2025-09-13T00:50:19.067615350Z" level=info msg="shim disconnected" id=a7c0c7ec3edb27c286f89d845509d536aefe3198c6ad46d0f4ae10e4cd1d1159 Sep 13 00:50:19.067670 env[1756]: time="2025-09-13T00:50:19.067659475Z" level=warning msg="cleaning up after shim disconnected" id=a7c0c7ec3edb27c286f89d845509d536aefe3198c6ad46d0f4ae10e4cd1d1159 namespace=k8s.io Sep 13 00:50:19.067670 env[1756]: time="2025-09-13T00:50:19.067668854Z" level=info msg="cleaning up dead shim" Sep 13 00:50:19.076455 env[1756]: time="2025-09-13T00:50:19.076381717Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3551 runtime=io.containerd.runc.v2\n" Sep 13 00:50:19.135134 env[1756]: time="2025-09-13T00:50:19.135036724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:50:19.902689 kubelet[2691]: E0913 00:50:19.902642 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbdlb" podUID="543e9814-38b9-4890-8c16-f362d4a3151e" Sep 13 00:50:21.902852 kubelet[2691]: E0913 00:50:21.902805 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbdlb" podUID="543e9814-38b9-4890-8c16-f362d4a3151e" Sep 13 00:50:23.283207 env[1756]: time="2025-09-13T00:50:23.283156627Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:23.287495 env[1756]: time="2025-09-13T00:50:23.287393701Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:23.290090 env[1756]: time="2025-09-13T00:50:23.290041497Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:23.292404 env[1756]: time="2025-09-13T00:50:23.292360603Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:23.293097 env[1756]: time="2025-09-13T00:50:23.293060438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:50:23.297617 env[1756]: time="2025-09-13T00:50:23.297577386Z" level=info msg="CreateContainer within sandbox \"ea000c5bd3bd64cb286e3b8c7bd3ba9b6bcdcdc03aacafa27142b36c02f734b5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:50:23.329902 env[1756]: time="2025-09-13T00:50:23.329724523Z" level=info msg="CreateContainer within sandbox \"ea000c5bd3bd64cb286e3b8c7bd3ba9b6bcdcdc03aacafa27142b36c02f734b5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"056d0e0ac859b87e45be852dd06eeaeec0cccadc9f684aefebef4eac09d792bf\"" Sep 13 00:50:23.331504 env[1756]: time="2025-09-13T00:50:23.330611253Z" level=info msg="StartContainer for \"056d0e0ac859b87e45be852dd06eeaeec0cccadc9f684aefebef4eac09d792bf\"" Sep 13 00:50:23.367657 systemd[1]: run-containerd-runc-k8s.io-056d0e0ac859b87e45be852dd06eeaeec0cccadc9f684aefebef4eac09d792bf-runc.vhGFnO.mount: Deactivated successfully. 
Sep 13 00:50:23.410894 env[1756]: time="2025-09-13T00:50:23.406680306Z" level=info msg="StartContainer for \"056d0e0ac859b87e45be852dd06eeaeec0cccadc9f684aefebef4eac09d792bf\" returns successfully" Sep 13 00:50:24.051918 kubelet[2691]: E0913 00:50:24.051850 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbdlb" podUID="543e9814-38b9-4890-8c16-f362d4a3151e" Sep 13 00:50:24.382055 env[1756]: time="2025-09-13T00:50:24.381928245Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:50:24.408766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-056d0e0ac859b87e45be852dd06eeaeec0cccadc9f684aefebef4eac09d792bf-rootfs.mount: Deactivated successfully. 
Sep 13 00:50:24.415903 env[1756]: time="2025-09-13T00:50:24.415841306Z" level=info msg="shim disconnected" id=056d0e0ac859b87e45be852dd06eeaeec0cccadc9f684aefebef4eac09d792bf Sep 13 00:50:24.415903 env[1756]: time="2025-09-13T00:50:24.415903113Z" level=warning msg="cleaning up after shim disconnected" id=056d0e0ac859b87e45be852dd06eeaeec0cccadc9f684aefebef4eac09d792bf namespace=k8s.io Sep 13 00:50:24.416163 env[1756]: time="2025-09-13T00:50:24.415913998Z" level=info msg="cleaning up dead shim" Sep 13 00:50:24.425143 env[1756]: time="2025-09-13T00:50:24.425040380Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3619 runtime=io.containerd.runc.v2\n" Sep 13 00:50:24.427591 kubelet[2691]: I0913 00:50:24.427561 2691 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:50:24.488105 kubelet[2691]: I0913 00:50:24.488064 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dfe4081-c1d4-427b-9b51-88e00048651f-config-volume\") pod \"coredns-7c65d6cfc9-nk5zv\" (UID: \"5dfe4081-c1d4-427b-9b51-88e00048651f\") " pod="kube-system/coredns-7c65d6cfc9-nk5zv" Sep 13 00:50:24.488105 kubelet[2691]: I0913 00:50:24.488112 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab504dd-aec7-4945-b513-319b96cc26d8-tigera-ca-bundle\") pod \"calico-kube-controllers-5775d679df-x29l7\" (UID: \"bab504dd-aec7-4945-b513-319b96cc26d8\") " pod="calico-system/calico-kube-controllers-5775d679df-x29l7" Sep 13 00:50:24.488285 kubelet[2691]: I0913 00:50:24.488129 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/11cfa55c-0906-4684-a09a-1123f4382816-whisker-backend-key-pair\") pod \"whisker-649cf9f94c-x7w52\" (UID: \"11cfa55c-0906-4684-a09a-1123f4382816\") " pod="calico-system/whisker-649cf9f94c-x7w52" Sep 13 00:50:24.488285 kubelet[2691]: I0913 00:50:24.488145 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11cfa55c-0906-4684-a09a-1123f4382816-whisker-ca-bundle\") pod \"whisker-649cf9f94c-x7w52\" (UID: \"11cfa55c-0906-4684-a09a-1123f4382816\") " pod="calico-system/whisker-649cf9f94c-x7w52" Sep 13 00:50:24.488285 kubelet[2691]: I0913 00:50:24.488175 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmkdx\" (UniqueName: \"kubernetes.io/projected/11cfa55c-0906-4684-a09a-1123f4382816-kube-api-access-qmkdx\") pod \"whisker-649cf9f94c-x7w52\" (UID: \"11cfa55c-0906-4684-a09a-1123f4382816\") " pod="calico-system/whisker-649cf9f94c-x7w52" Sep 13 00:50:24.488285 kubelet[2691]: I0913 00:50:24.488195 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj4wt\" (UniqueName: \"kubernetes.io/projected/5dfe4081-c1d4-427b-9b51-88e00048651f-kube-api-access-wj4wt\") pod \"coredns-7c65d6cfc9-nk5zv\" (UID: \"5dfe4081-c1d4-427b-9b51-88e00048651f\") " pod="kube-system/coredns-7c65d6cfc9-nk5zv" Sep 13 00:50:24.488285 kubelet[2691]: I0913 00:50:24.488210 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85d7l\" (UniqueName: \"kubernetes.io/projected/bab504dd-aec7-4945-b513-319b96cc26d8-kube-api-access-85d7l\") pod \"calico-kube-controllers-5775d679df-x29l7\" (UID: \"bab504dd-aec7-4945-b513-319b96cc26d8\") " pod="calico-system/calico-kube-controllers-5775d679df-x29l7" Sep 13 00:50:24.695280 kubelet[2691]: I0913 00:50:24.695213 2691 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqd4v\" (UniqueName: \"kubernetes.io/projected/fda33809-2b03-4521-bce2-be3153adfcec-kube-api-access-jqd4v\") pod \"coredns-7c65d6cfc9-2frpj\" (UID: \"fda33809-2b03-4521-bce2-be3153adfcec\") " pod="kube-system/coredns-7c65d6cfc9-2frpj" Sep 13 00:50:24.695280 kubelet[2691]: I0913 00:50:24.695257 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3899ee84-120e-4dd4-9caa-e6d9f0157ae0-calico-apiserver-certs\") pod \"calico-apiserver-5dcbb86cdd-8mn66\" (UID: \"3899ee84-120e-4dd4-9caa-e6d9f0157ae0\") " pod="calico-apiserver/calico-apiserver-5dcbb86cdd-8mn66" Sep 13 00:50:24.695280 kubelet[2691]: I0913 00:50:24.695280 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlp8\" (UniqueName: \"kubernetes.io/projected/7dc04780-57ee-4fd0-a262-21a1cfd3d394-kube-api-access-7nlp8\") pod \"goldmane-7988f88666-xnfhz\" (UID: \"7dc04780-57ee-4fd0-a262-21a1cfd3d394\") " pod="calico-system/goldmane-7988f88666-xnfhz" Sep 13 00:50:24.695532 kubelet[2691]: I0913 00:50:24.695317 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dc04780-57ee-4fd0-a262-21a1cfd3d394-config\") pod \"goldmane-7988f88666-xnfhz\" (UID: \"7dc04780-57ee-4fd0-a262-21a1cfd3d394\") " pod="calico-system/goldmane-7988f88666-xnfhz" Sep 13 00:50:24.695532 kubelet[2691]: I0913 00:50:24.695336 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7dc04780-57ee-4fd0-a262-21a1cfd3d394-goldmane-key-pair\") pod \"goldmane-7988f88666-xnfhz\" (UID: \"7dc04780-57ee-4fd0-a262-21a1cfd3d394\") " pod="calico-system/goldmane-7988f88666-xnfhz" Sep 13 00:50:24.695532 
kubelet[2691]: I0913 00:50:24.695351 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnmbt\" (UniqueName: \"kubernetes.io/projected/3899ee84-120e-4dd4-9caa-e6d9f0157ae0-kube-api-access-vnmbt\") pod \"calico-apiserver-5dcbb86cdd-8mn66\" (UID: \"3899ee84-120e-4dd4-9caa-e6d9f0157ae0\") " pod="calico-apiserver/calico-apiserver-5dcbb86cdd-8mn66" Sep 13 00:50:24.695532 kubelet[2691]: I0913 00:50:24.695369 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fda33809-2b03-4521-bce2-be3153adfcec-config-volume\") pod \"coredns-7c65d6cfc9-2frpj\" (UID: \"fda33809-2b03-4521-bce2-be3153adfcec\") " pod="kube-system/coredns-7c65d6cfc9-2frpj" Sep 13 00:50:24.695532 kubelet[2691]: I0913 00:50:24.695387 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h84w\" (UniqueName: \"kubernetes.io/projected/71d9264d-8131-4edf-9956-3d6532ed3b91-kube-api-access-9h84w\") pod \"calico-apiserver-5dcbb86cdd-p4282\" (UID: \"71d9264d-8131-4edf-9956-3d6532ed3b91\") " pod="calico-apiserver/calico-apiserver-5dcbb86cdd-p4282" Sep 13 00:50:24.695689 kubelet[2691]: I0913 00:50:24.695406 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/71d9264d-8131-4edf-9956-3d6532ed3b91-calico-apiserver-certs\") pod \"calico-apiserver-5dcbb86cdd-p4282\" (UID: \"71d9264d-8131-4edf-9956-3d6532ed3b91\") " pod="calico-apiserver/calico-apiserver-5dcbb86cdd-p4282" Sep 13 00:50:24.695689 kubelet[2691]: I0913 00:50:24.695421 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dc04780-57ee-4fd0-a262-21a1cfd3d394-goldmane-ca-bundle\") pod \"goldmane-7988f88666-xnfhz\" 
(UID: \"7dc04780-57ee-4fd0-a262-21a1cfd3d394\") " pod="calico-system/goldmane-7988f88666-xnfhz" Sep 13 00:50:24.781168 env[1756]: time="2025-09-13T00:50:24.781129095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5775d679df-x29l7,Uid:bab504dd-aec7-4945-b513-319b96cc26d8,Namespace:calico-system,Attempt:0,}" Sep 13 00:50:24.787240 env[1756]: time="2025-09-13T00:50:24.787197724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-649cf9f94c-x7w52,Uid:11cfa55c-0906-4684-a09a-1123f4382816,Namespace:calico-system,Attempt:0,}" Sep 13 00:50:24.792469 env[1756]: time="2025-09-13T00:50:24.792420947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nk5zv,Uid:5dfe4081-c1d4-427b-9b51-88e00048651f,Namespace:kube-system,Attempt:0,}" Sep 13 00:50:25.113237 env[1756]: time="2025-09-13T00:50:25.113115601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xnfhz,Uid:7dc04780-57ee-4fd0-a262-21a1cfd3d394,Namespace:calico-system,Attempt:0,}" Sep 13 00:50:25.114149 env[1756]: time="2025-09-13T00:50:25.114109969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcbb86cdd-p4282,Uid:71d9264d-8131-4edf-9956-3d6532ed3b91,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:50:25.114731 env[1756]: time="2025-09-13T00:50:25.114698040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2frpj,Uid:fda33809-2b03-4521-bce2-be3153adfcec,Namespace:kube-system,Attempt:0,}" Sep 13 00:50:25.120613 env[1756]: time="2025-09-13T00:50:25.120568357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcbb86cdd-8mn66,Uid:3899ee84-120e-4dd4-9caa-e6d9f0157ae0,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:50:25.161670 env[1756]: time="2025-09-13T00:50:25.161592299Z" level=error msg="Failed to destroy network for sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.162356 env[1756]: time="2025-09-13T00:50:25.162305973Z" level=error msg="encountered an error cleaning up failed sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.162482 env[1756]: time="2025-09-13T00:50:25.162384587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5775d679df-x29l7,Uid:bab504dd-aec7-4945-b513-319b96cc26d8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.162898 kubelet[2691]: E0913 00:50:25.162745 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.162898 kubelet[2691]: E0913 00:50:25.162836 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5775d679df-x29l7" Sep 13 00:50:25.163673 kubelet[2691]: E0913 00:50:25.162865 2691 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5775d679df-x29l7" Sep 13 00:50:25.163673 kubelet[2691]: E0913 00:50:25.163512 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5775d679df-x29l7_calico-system(bab504dd-aec7-4945-b513-319b96cc26d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5775d679df-x29l7_calico-system(bab504dd-aec7-4945-b513-319b96cc26d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5775d679df-x29l7" podUID="bab504dd-aec7-4945-b513-319b96cc26d8" Sep 13 00:50:25.169889 env[1756]: time="2025-09-13T00:50:25.169810351Z" level=error msg="Failed to destroy network for sandbox \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.170307 env[1756]: time="2025-09-13T00:50:25.170248589Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.170410 env[1756]: time="2025-09-13T00:50:25.170328923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-649cf9f94c-x7w52,Uid:11cfa55c-0906-4684-a09a-1123f4382816,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.170980 kubelet[2691]: E0913 00:50:25.170792 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.170980 kubelet[2691]: E0913 00:50:25.170852 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-649cf9f94c-x7w52" Sep 13 00:50:25.170980 kubelet[2691]: E0913 00:50:25.170888 2691 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-649cf9f94c-x7w52" Sep 13 00:50:25.171197 kubelet[2691]: E0913 00:50:25.170935 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-649cf9f94c-x7w52_calico-system(11cfa55c-0906-4684-a09a-1123f4382816)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-649cf9f94c-x7w52_calico-system(11cfa55c-0906-4684-a09a-1123f4382816)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-649cf9f94c-x7w52" podUID="11cfa55c-0906-4684-a09a-1123f4382816" Sep 13 00:50:25.174894 kubelet[2691]: I0913 00:50:25.174515 2691 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:25.186098 env[1756]: time="2025-09-13T00:50:25.186048975Z" level=info msg="StopPodSandbox for \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\"" Sep 13 00:50:25.193586 env[1756]: time="2025-09-13T00:50:25.193545152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:50:25.195972 kubelet[2691]: I0913 00:50:25.195019 2691 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:50:25.196395 env[1756]: time="2025-09-13T00:50:25.196362674Z" level=info msg="StopPodSandbox for 
\"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\"" Sep 13 00:50:25.278427 env[1756]: time="2025-09-13T00:50:25.278368405Z" level=error msg="Failed to destroy network for sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.278838 env[1756]: time="2025-09-13T00:50:25.278788756Z" level=error msg="encountered an error cleaning up failed sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.278953 env[1756]: time="2025-09-13T00:50:25.278898863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nk5zv,Uid:5dfe4081-c1d4-427b-9b51-88e00048651f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.279802 kubelet[2691]: E0913 00:50:25.279262 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.279802 kubelet[2691]: E0913 00:50:25.279345 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nk5zv" Sep 13 00:50:25.279802 kubelet[2691]: E0913 00:50:25.279387 2691 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nk5zv" Sep 13 00:50:25.280043 kubelet[2691]: E0913 00:50:25.279471 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nk5zv_kube-system(5dfe4081-c1d4-427b-9b51-88e00048651f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nk5zv_kube-system(5dfe4081-c1d4-427b-9b51-88e00048651f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nk5zv" podUID="5dfe4081-c1d4-427b-9b51-88e00048651f" Sep 13 00:50:25.369294 env[1756]: time="2025-09-13T00:50:25.367755913Z" level=error msg="StopPodSandbox for \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\" failed" error="failed to destroy network for sandbox \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.372785 kubelet[2691]: E0913 00:50:25.369761 2691 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:25.372971 kubelet[2691]: E0913 00:50:25.372822 2691 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474"} Sep 13 00:50:25.372971 kubelet[2691]: E0913 00:50:25.372937 2691 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11cfa55c-0906-4684-a09a-1123f4382816\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:50:25.373120 kubelet[2691]: E0913 00:50:25.372976 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11cfa55c-0906-4684-a09a-1123f4382816\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-649cf9f94c-x7w52" podUID="11cfa55c-0906-4684-a09a-1123f4382816" Sep 13 00:50:25.377193 env[1756]: time="2025-09-13T00:50:25.377139942Z" level=error msg="StopPodSandbox for \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\" failed" error="failed to destroy network for sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.377535 kubelet[2691]: E0913 00:50:25.377494 2691 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:50:25.377654 kubelet[2691]: E0913 00:50:25.377548 2691 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1"} Sep 13 00:50:25.377654 kubelet[2691]: E0913 00:50:25.377595 2691 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bab504dd-aec7-4945-b513-319b96cc26d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:50:25.377654 kubelet[2691]: E0913 00:50:25.377625 2691 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bab504dd-aec7-4945-b513-319b96cc26d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5775d679df-x29l7" podUID="bab504dd-aec7-4945-b513-319b96cc26d8" Sep 13 00:50:25.455573 env[1756]: time="2025-09-13T00:50:25.455512093Z" level=error msg="Failed to destroy network for sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.461500 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4-shm.mount: Deactivated successfully. 
Sep 13 00:50:25.470967 env[1756]: time="2025-09-13T00:50:25.470858681Z" level=error msg="encountered an error cleaning up failed sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.471116 env[1756]: time="2025-09-13T00:50:25.471033664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2frpj,Uid:fda33809-2b03-4521-bce2-be3153adfcec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.471394 kubelet[2691]: E0913 00:50:25.471356 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.471487 kubelet[2691]: E0913 00:50:25.471441 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2frpj" Sep 13 00:50:25.471487 kubelet[2691]: E0913 00:50:25.471473 2691 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2frpj" Sep 13 00:50:25.471590 kubelet[2691]: E0913 00:50:25.471562 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2frpj_kube-system(fda33809-2b03-4521-bce2-be3153adfcec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2frpj_kube-system(fda33809-2b03-4521-bce2-be3153adfcec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2frpj" podUID="fda33809-2b03-4521-bce2-be3153adfcec" Sep 13 00:50:25.516338 env[1756]: time="2025-09-13T00:50:25.516275792Z" level=error msg="Failed to destroy network for sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.522526 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a-shm.mount: Deactivated successfully. 
Sep 13 00:50:25.524805 env[1756]: time="2025-09-13T00:50:25.524745317Z" level=error msg="encountered an error cleaning up failed sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.525006 env[1756]: time="2025-09-13T00:50:25.524823954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xnfhz,Uid:7dc04780-57ee-4fd0-a262-21a1cfd3d394,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.525314 kubelet[2691]: E0913 00:50:25.525260 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.525412 kubelet[2691]: E0913 00:50:25.525328 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-xnfhz" Sep 13 00:50:25.525412 kubelet[2691]: E0913 00:50:25.525354 2691 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-xnfhz" Sep 13 00:50:25.525506 kubelet[2691]: E0913 00:50:25.525412 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-xnfhz_calico-system(7dc04780-57ee-4fd0-a262-21a1cfd3d394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-xnfhz_calico-system(7dc04780-57ee-4fd0-a262-21a1cfd3d394)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-xnfhz" podUID="7dc04780-57ee-4fd0-a262-21a1cfd3d394" Sep 13 00:50:25.527175 env[1756]: time="2025-09-13T00:50:25.525188798Z" level=error msg="Failed to destroy network for sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.530386 env[1756]: time="2025-09-13T00:50:25.530326395Z" level=error msg="encountered an error cleaning up failed sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.530583 env[1756]: time="2025-09-13T00:50:25.530538044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcbb86cdd-p4282,Uid:71d9264d-8131-4edf-9956-3d6532ed3b91,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.533020 kubelet[2691]: E0913 00:50:25.532026 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.533020 kubelet[2691]: E0913 00:50:25.532084 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dcbb86cdd-p4282" Sep 13 00:50:25.533020 kubelet[2691]: E0913 00:50:25.532110 2691 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dcbb86cdd-p4282" Sep 13 00:50:25.532674 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a-shm.mount: Deactivated successfully. Sep 13 00:50:25.533393 kubelet[2691]: E0913 00:50:25.532157 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dcbb86cdd-p4282_calico-apiserver(71d9264d-8131-4edf-9956-3d6532ed3b91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dcbb86cdd-p4282_calico-apiserver(71d9264d-8131-4edf-9956-3d6532ed3b91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dcbb86cdd-p4282" podUID="71d9264d-8131-4edf-9956-3d6532ed3b91" Sep 13 00:50:25.538238 env[1756]: time="2025-09-13T00:50:25.538180016Z" level=error msg="Failed to destroy network for sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.545034 env[1756]: time="2025-09-13T00:50:25.538561441Z" level=error msg="encountered an error cleaning up failed sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.545034 env[1756]: time="2025-09-13T00:50:25.538633588Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcbb86cdd-8mn66,Uid:3899ee84-120e-4dd4-9caa-e6d9f0157ae0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.541700 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06-shm.mount: Deactivated successfully. Sep 13 00:50:25.545311 kubelet[2691]: E0913 00:50:25.538916 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.545311 kubelet[2691]: E0913 00:50:25.538982 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dcbb86cdd-8mn66" Sep 13 00:50:25.545311 kubelet[2691]: E0913 00:50:25.539010 2691 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dcbb86cdd-8mn66" Sep 13 00:50:25.545495 kubelet[2691]: E0913 00:50:25.539059 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dcbb86cdd-8mn66_calico-apiserver(3899ee84-120e-4dd4-9caa-e6d9f0157ae0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dcbb86cdd-8mn66_calico-apiserver(3899ee84-120e-4dd4-9caa-e6d9f0157ae0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dcbb86cdd-8mn66" podUID="3899ee84-120e-4dd4-9caa-e6d9f0157ae0" Sep 13 00:50:25.906114 env[1756]: time="2025-09-13T00:50:25.906065470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbdlb,Uid:543e9814-38b9-4890-8c16-f362d4a3151e,Namespace:calico-system,Attempt:0,}" Sep 13 00:50:25.977691 env[1756]: time="2025-09-13T00:50:25.977614426Z" level=error msg="Failed to destroy network for sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.978189 env[1756]: time="2025-09-13T00:50:25.978143381Z" level=error msg="encountered an error cleaning up failed sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 13 00:50:25.978307 env[1756]: time="2025-09-13T00:50:25.978228752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbdlb,Uid:543e9814-38b9-4890-8c16-f362d4a3151e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.978984 kubelet[2691]: E0913 00:50:25.978551 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:25.978984 kubelet[2691]: E0913 00:50:25.978612 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbdlb" Sep 13 00:50:25.978984 kubelet[2691]: E0913 00:50:25.978636 2691 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbdlb" Sep 13 00:50:25.979185 
kubelet[2691]: E0913 00:50:25.978700 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbdlb_calico-system(543e9814-38b9-4890-8c16-f362d4a3151e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbdlb_calico-system(543e9814-38b9-4890-8c16-f362d4a3151e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbdlb" podUID="543e9814-38b9-4890-8c16-f362d4a3151e" Sep 13 00:50:26.198054 kubelet[2691]: I0913 00:50:26.198008 2691 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:26.200029 env[1756]: time="2025-09-13T00:50:26.199981407Z" level=info msg="StopPodSandbox for \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\"" Sep 13 00:50:26.202588 kubelet[2691]: I0913 00:50:26.201717 2691 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:26.202863 env[1756]: time="2025-09-13T00:50:26.202827392Z" level=info msg="StopPodSandbox for \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\"" Sep 13 00:50:26.204838 kubelet[2691]: I0913 00:50:26.204422 2691 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:26.205123 env[1756]: time="2025-09-13T00:50:26.205099656Z" level=info msg="StopPodSandbox for \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\"" Sep 13 00:50:26.207708 kubelet[2691]: I0913 
00:50:26.207379 2691 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:26.207987 env[1756]: time="2025-09-13T00:50:26.207949837Z" level=info msg="StopPodSandbox for \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\"" Sep 13 00:50:26.209228 kubelet[2691]: I0913 00:50:26.209202 2691 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:26.209681 env[1756]: time="2025-09-13T00:50:26.209656404Z" level=info msg="StopPodSandbox for \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\"" Sep 13 00:50:26.212021 kubelet[2691]: I0913 00:50:26.211662 2691 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:26.212994 env[1756]: time="2025-09-13T00:50:26.212970092Z" level=info msg="StopPodSandbox for \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\"" Sep 13 00:50:26.316310 env[1756]: time="2025-09-13T00:50:26.316242900Z" level=error msg="StopPodSandbox for \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\" failed" error="failed to destroy network for sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:26.316787 kubelet[2691]: E0913 00:50:26.316634 2691 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:26.316787 kubelet[2691]: E0913 00:50:26.316677 2691 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a"} Sep 13 00:50:26.316787 kubelet[2691]: E0913 00:50:26.316730 2691 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71d9264d-8131-4edf-9956-3d6532ed3b91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:50:26.316787 kubelet[2691]: E0913 00:50:26.316751 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71d9264d-8131-4edf-9956-3d6532ed3b91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dcbb86cdd-p4282" podUID="71d9264d-8131-4edf-9956-3d6532ed3b91" Sep 13 00:50:26.319520 env[1756]: time="2025-09-13T00:50:26.319475782Z" level=error msg="StopPodSandbox for \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\" failed" error="failed to destroy network for sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:26.319938 kubelet[2691]: E0913 00:50:26.319778 2691 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:26.319938 kubelet[2691]: E0913 00:50:26.319818 2691 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807"} Sep 13 00:50:26.319938 kubelet[2691]: E0913 00:50:26.319856 2691 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5dfe4081-c1d4-427b-9b51-88e00048651f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:50:26.319938 kubelet[2691]: E0913 00:50:26.319892 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5dfe4081-c1d4-427b-9b51-88e00048651f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nk5zv" 
podUID="5dfe4081-c1d4-427b-9b51-88e00048651f" Sep 13 00:50:26.327348 env[1756]: time="2025-09-13T00:50:26.327298314Z" level=error msg="StopPodSandbox for \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\" failed" error="failed to destroy network for sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:26.336323 kubelet[2691]: E0913 00:50:26.336264 2691 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:26.336495 kubelet[2691]: E0913 00:50:26.336332 2691 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06"} Sep 13 00:50:26.336495 kubelet[2691]: E0913 00:50:26.336391 2691 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3899ee84-120e-4dd4-9caa-e6d9f0157ae0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:50:26.336495 kubelet[2691]: E0913 00:50:26.336415 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"3899ee84-120e-4dd4-9caa-e6d9f0157ae0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dcbb86cdd-8mn66" podUID="3899ee84-120e-4dd4-9caa-e6d9f0157ae0" Sep 13 00:50:26.350160 env[1756]: time="2025-09-13T00:50:26.350013850Z" level=error msg="StopPodSandbox for \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\" failed" error="failed to destroy network for sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:26.353270 kubelet[2691]: E0913 00:50:26.353221 2691 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:26.353450 kubelet[2691]: E0913 00:50:26.353274 2691 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a"} Sep 13 00:50:26.353450 kubelet[2691]: E0913 00:50:26.353312 2691 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7dc04780-57ee-4fd0-a262-21a1cfd3d394\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:50:26.353450 kubelet[2691]: E0913 00:50:26.353334 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7dc04780-57ee-4fd0-a262-21a1cfd3d394\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-xnfhz" podUID="7dc04780-57ee-4fd0-a262-21a1cfd3d394" Sep 13 00:50:26.353596 env[1756]: time="2025-09-13T00:50:26.353443972Z" level=error msg="StopPodSandbox for \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\" failed" error="failed to destroy network for sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:26.353596 env[1756]: time="2025-09-13T00:50:26.353556864Z" level=error msg="StopPodSandbox for \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\" failed" error="failed to destroy network for sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:50:26.353705 kubelet[2691]: E0913 00:50:26.353673 2691 log.go:32] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:26.353743 kubelet[2691]: E0913 00:50:26.353709 2691 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4"} Sep 13 00:50:26.353743 kubelet[2691]: E0913 00:50:26.353732 2691 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fda33809-2b03-4521-bce2-be3153adfcec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:50:26.353979 kubelet[2691]: E0913 00:50:26.353748 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fda33809-2b03-4521-bce2-be3153adfcec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2frpj" podUID="fda33809-2b03-4521-bce2-be3153adfcec" Sep 13 00:50:26.353979 kubelet[2691]: E0913 00:50:26.353770 2691 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:26.353979 kubelet[2691]: E0913 00:50:26.353783 2691 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa"} Sep 13 00:50:26.353979 kubelet[2691]: E0913 00:50:26.353808 2691 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"543e9814-38b9-4890-8c16-f362d4a3151e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:50:26.354145 kubelet[2691]: E0913 00:50:26.353823 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"543e9814-38b9-4890-8c16-f362d4a3151e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbdlb" podUID="543e9814-38b9-4890-8c16-f362d4a3151e" Sep 13 00:50:31.526596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101321680.mount: Deactivated successfully. 
Sep 13 00:50:31.574433 env[1756]: time="2025-09-13T00:50:31.574355391Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:31.582654 env[1756]: time="2025-09-13T00:50:31.582318475Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:31.586318 env[1756]: time="2025-09-13T00:50:31.586261405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:31.590913 env[1756]: time="2025-09-13T00:50:31.590858901Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:31.591687 env[1756]: time="2025-09-13T00:50:31.591662519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:50:31.636207 env[1756]: time="2025-09-13T00:50:31.636165781Z" level=info msg="CreateContainer within sandbox \"ea000c5bd3bd64cb286e3b8c7bd3ba9b6bcdcdc03aacafa27142b36c02f734b5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:50:31.664608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595968563.mount: Deactivated successfully. 
Sep 13 00:50:31.677071 env[1756]: time="2025-09-13T00:50:31.676889637Z" level=info msg="CreateContainer within sandbox \"ea000c5bd3bd64cb286e3b8c7bd3ba9b6bcdcdc03aacafa27142b36c02f734b5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"478e0c07c316148c4da74df592e48194091720da467093fee0f514858cab9e32\"" Sep 13 00:50:31.679927 env[1756]: time="2025-09-13T00:50:31.677969363Z" level=info msg="StartContainer for \"478e0c07c316148c4da74df592e48194091720da467093fee0f514858cab9e32\"" Sep 13 00:50:31.764056 env[1756]: time="2025-09-13T00:50:31.763632974Z" level=info msg="StartContainer for \"478e0c07c316148c4da74df592e48194091720da467093fee0f514858cab9e32\" returns successfully" Sep 13 00:50:32.138028 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:50:32.138181 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:50:32.297322 kubelet[2691]: I0913 00:50:32.289469 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ddf5z" podStartSLOduration=1.501850287 podStartE2EDuration="18.28246223s" podCreationTimestamp="2025-09-13 00:50:14 +0000 UTC" firstStartedPulling="2025-09-13 00:50:14.812360953 +0000 UTC m=+23.119989882" lastFinishedPulling="2025-09-13 00:50:31.592972896 +0000 UTC m=+39.900601825" observedRunningTime="2025-09-13 00:50:32.281808287 +0000 UTC m=+40.589437240" watchObservedRunningTime="2025-09-13 00:50:32.28246223 +0000 UTC m=+40.590091182" Sep 13 00:50:32.481128 env[1756]: time="2025-09-13T00:50:32.478986837Z" level=info msg="StopPodSandbox for \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\"" Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:32.681 [INFO][4048] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:32.683 [INFO][4048] cni-plugin/dataplane_linux.go 559: 
Deleting workload's device in netns. ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" iface="eth0" netns="/var/run/netns/cni-89d93aa8-2ce0-6db3-2da0-4dbc5814e0ee" Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:32.684 [INFO][4048] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" iface="eth0" netns="/var/run/netns/cni-89d93aa8-2ce0-6db3-2da0-4dbc5814e0ee" Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:32.685 [INFO][4048] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" iface="eth0" netns="/var/run/netns/cni-89d93aa8-2ce0-6db3-2da0-4dbc5814e0ee" Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:32.685 [INFO][4048] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:32.686 [INFO][4048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:32.986 [INFO][4055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" HandleID="k8s-pod-network.c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Workload="ip--172--31--30--243-k8s-whisker--649cf9f94c--x7w52-eth0" Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:32.988 [INFO][4055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:32.990 [INFO][4055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:33.011 [WARNING][4055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" HandleID="k8s-pod-network.c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Workload="ip--172--31--30--243-k8s-whisker--649cf9f94c--x7w52-eth0" Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:33.011 [INFO][4055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" HandleID="k8s-pod-network.c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Workload="ip--172--31--30--243-k8s-whisker--649cf9f94c--x7w52-eth0" Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:33.013 [INFO][4055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:33.018019 env[1756]: 2025-09-13 00:50:33.015 [INFO][4048] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:33.028364 env[1756]: time="2025-09-13T00:50:33.018152397Z" level=info msg="TearDown network for sandbox \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\" successfully" Sep 13 00:50:33.028364 env[1756]: time="2025-09-13T00:50:33.018189960Z" level=info msg="StopPodSandbox for \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\" returns successfully" Sep 13 00:50:33.023663 systemd[1]: run-netns-cni\x2d89d93aa8\x2d2ce0\x2d6db3\x2d2da0\x2d4dbc5814e0ee.mount: Deactivated successfully. 
Sep 13 00:50:33.189383 kubelet[2691]: I0913 00:50:33.189318 2691 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11cfa55c-0906-4684-a09a-1123f4382816-whisker-backend-key-pair\") pod \"11cfa55c-0906-4684-a09a-1123f4382816\" (UID: \"11cfa55c-0906-4684-a09a-1123f4382816\") " Sep 13 00:50:33.189608 kubelet[2691]: I0913 00:50:33.189459 2691 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmkdx\" (UniqueName: \"kubernetes.io/projected/11cfa55c-0906-4684-a09a-1123f4382816-kube-api-access-qmkdx\") pod \"11cfa55c-0906-4684-a09a-1123f4382816\" (UID: \"11cfa55c-0906-4684-a09a-1123f4382816\") " Sep 13 00:50:33.189608 kubelet[2691]: I0913 00:50:33.189508 2691 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11cfa55c-0906-4684-a09a-1123f4382816-whisker-ca-bundle\") pod \"11cfa55c-0906-4684-a09a-1123f4382816\" (UID: \"11cfa55c-0906-4684-a09a-1123f4382816\") " Sep 13 00:50:33.207140 kubelet[2691]: I0913 00:50:33.203317 2691 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11cfa55c-0906-4684-a09a-1123f4382816-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "11cfa55c-0906-4684-a09a-1123f4382816" (UID: "11cfa55c-0906-4684-a09a-1123f4382816"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:50:33.213258 systemd[1]: var-lib-kubelet-pods-11cfa55c\x2d0906\x2d4684\x2da09a\x2d1123f4382816-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqmkdx.mount: Deactivated successfully. Sep 13 00:50:33.213409 systemd[1]: var-lib-kubelet-pods-11cfa55c\x2d0906\x2d4684\x2da09a\x2d1123f4382816-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:50:33.215416 kubelet[2691]: I0913 00:50:33.214413 2691 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11cfa55c-0906-4684-a09a-1123f4382816-kube-api-access-qmkdx" (OuterVolumeSpecName: "kube-api-access-qmkdx") pod "11cfa55c-0906-4684-a09a-1123f4382816" (UID: "11cfa55c-0906-4684-a09a-1123f4382816"). InnerVolumeSpecName "kube-api-access-qmkdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:50:33.215989 kubelet[2691]: I0913 00:50:33.215962 2691 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11cfa55c-0906-4684-a09a-1123f4382816-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "11cfa55c-0906-4684-a09a-1123f4382816" (UID: "11cfa55c-0906-4684-a09a-1123f4382816"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:50:33.242601 kubelet[2691]: I0913 00:50:33.241162 2691 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:50:33.290943 kubelet[2691]: I0913 00:50:33.290796 2691 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11cfa55c-0906-4684-a09a-1123f4382816-whisker-ca-bundle\") on node \"ip-172-31-30-243\" DevicePath \"\"" Sep 13 00:50:33.290943 kubelet[2691]: I0913 00:50:33.290848 2691 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11cfa55c-0906-4684-a09a-1123f4382816-whisker-backend-key-pair\") on node \"ip-172-31-30-243\" DevicePath \"\"" Sep 13 00:50:33.290943 kubelet[2691]: I0913 00:50:33.290866 2691 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmkdx\" (UniqueName: \"kubernetes.io/projected/11cfa55c-0906-4684-a09a-1123f4382816-kube-api-access-qmkdx\") on node \"ip-172-31-30-243\" DevicePath \"\"" Sep 13 00:50:33.493066 kubelet[2691]: I0913 00:50:33.493006 2691 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvfvt\" (UniqueName: \"kubernetes.io/projected/ec2ce1d7-66ab-4edb-a954-03dd28778881-kube-api-access-zvfvt\") pod \"whisker-6cbb85f794-lvdgm\" (UID: \"ec2ce1d7-66ab-4edb-a954-03dd28778881\") " pod="calico-system/whisker-6cbb85f794-lvdgm" Sep 13 00:50:33.493663 kubelet[2691]: I0913 00:50:33.493129 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ec2ce1d7-66ab-4edb-a954-03dd28778881-whisker-backend-key-pair\") pod \"whisker-6cbb85f794-lvdgm\" (UID: \"ec2ce1d7-66ab-4edb-a954-03dd28778881\") " pod="calico-system/whisker-6cbb85f794-lvdgm" Sep 13 00:50:33.493663 kubelet[2691]: I0913 00:50:33.493185 2691 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec2ce1d7-66ab-4edb-a954-03dd28778881-whisker-ca-bundle\") pod \"whisker-6cbb85f794-lvdgm\" (UID: \"ec2ce1d7-66ab-4edb-a954-03dd28778881\") " pod="calico-system/whisker-6cbb85f794-lvdgm" Sep 13 00:50:33.531022 systemd[1]: run-containerd-runc-k8s.io-478e0c07c316148c4da74df592e48194091720da467093fee0f514858cab9e32-runc.i6v13S.mount: Deactivated successfully. Sep 13 00:50:33.640581 env[1756]: time="2025-09-13T00:50:33.640127027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cbb85f794-lvdgm,Uid:ec2ce1d7-66ab-4edb-a954-03dd28778881,Namespace:calico-system,Attempt:0,}" Sep 13 00:50:33.819525 (udev-worker)[4138]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:50:33.823537 systemd-networkd[1433]: cali2d162fef528: Link UP Sep 13 00:50:33.830918 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:50:33.831170 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2d162fef528: link becomes ready Sep 13 00:50:33.840857 systemd-networkd[1433]: cali2d162fef528: Gained carrier Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.696 [INFO][4119] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.710 [INFO][4119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0 whisker-6cbb85f794- calico-system ec2ce1d7-66ab-4edb-a954-03dd28778881 885 0 2025-09-13 00:50:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cbb85f794 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-243 whisker-6cbb85f794-lvdgm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2d162fef528 [] [] }} ContainerID="6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" Namespace="calico-system" Pod="whisker-6cbb85f794-lvdgm" WorkloadEndpoint="ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.710 [INFO][4119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" Namespace="calico-system" Pod="whisker-6cbb85f794-lvdgm" WorkloadEndpoint="ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.748 [INFO][4131] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" 
HandleID="k8s-pod-network.6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" Workload="ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.748 [INFO][4131] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" HandleID="k8s-pod-network.6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" Workload="ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5040), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-243", "pod":"whisker-6cbb85f794-lvdgm", "timestamp":"2025-09-13 00:50:33.748496945 +0000 UTC"}, Hostname:"ip-172-31-30-243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.749 [INFO][4131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.749 [INFO][4131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.749 [INFO][4131] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-243' Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.759 [INFO][4131] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" host="ip-172-31-30-243" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.772 [INFO][4131] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-243" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.777 [INFO][4131] ipam/ipam.go 511: Trying affinity for 192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.780 [INFO][4131] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.782 [INFO][4131] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.782 [INFO][4131] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" host="ip-172-31-30-243" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.784 [INFO][4131] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512 Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.792 [INFO][4131] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" host="ip-172-31-30-243" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.800 [INFO][4131] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.124.193/26] block=192.168.124.192/26 
handle="k8s-pod-network.6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" host="ip-172-31-30-243" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.800 [INFO][4131] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.193/26] handle="k8s-pod-network.6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" host="ip-172-31-30-243" Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.800 [INFO][4131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:33.859826 env[1756]: 2025-09-13 00:50:33.800 [INFO][4131] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.193/26] IPv6=[] ContainerID="6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" HandleID="k8s-pod-network.6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" Workload="ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0" Sep 13 00:50:33.862224 env[1756]: 2025-09-13 00:50:33.803 [INFO][4119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" Namespace="calico-system" Pod="whisker-6cbb85f794-lvdgm" WorkloadEndpoint="ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0", GenerateName:"whisker-6cbb85f794-", Namespace:"calico-system", SelfLink:"", UID:"ec2ce1d7-66ab-4edb-a954-03dd28778881", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cbb85f794", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"", Pod:"whisker-6cbb85f794-lvdgm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d162fef528", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:33.862224 env[1756]: 2025-09-13 00:50:33.803 [INFO][4119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.193/32] ContainerID="6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" Namespace="calico-system" Pod="whisker-6cbb85f794-lvdgm" WorkloadEndpoint="ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0" Sep 13 00:50:33.862224 env[1756]: 2025-09-13 00:50:33.803 [INFO][4119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d162fef528 ContainerID="6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" Namespace="calico-system" Pod="whisker-6cbb85f794-lvdgm" WorkloadEndpoint="ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0" Sep 13 00:50:33.862224 env[1756]: 2025-09-13 00:50:33.831 [INFO][4119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" Namespace="calico-system" Pod="whisker-6cbb85f794-lvdgm" WorkloadEndpoint="ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0" Sep 13 00:50:33.862224 env[1756]: 2025-09-13 00:50:33.832 [INFO][4119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" Namespace="calico-system" 
Pod="whisker-6cbb85f794-lvdgm" WorkloadEndpoint="ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0", GenerateName:"whisker-6cbb85f794-", Namespace:"calico-system", SelfLink:"", UID:"ec2ce1d7-66ab-4edb-a954-03dd28778881", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cbb85f794", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512", Pod:"whisker-6cbb85f794-lvdgm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d162fef528", MAC:"86:02:f4:d6:45:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:33.862224 env[1756]: 2025-09-13 00:50:33.857 [INFO][4119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512" Namespace="calico-system" Pod="whisker-6cbb85f794-lvdgm" WorkloadEndpoint="ip--172--31--30--243-k8s-whisker--6cbb85f794--lvdgm-eth0" Sep 13 00:50:33.874288 env[1756]: 
time="2025-09-13T00:50:33.874140613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:33.874462 env[1756]: time="2025-09-13T00:50:33.874312723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:33.874462 env[1756]: time="2025-09-13T00:50:33.874367502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:33.874715 env[1756]: time="2025-09-13T00:50:33.874675792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512 pid=4153 runtime=io.containerd.runc.v2 Sep 13 00:50:33.929181 kubelet[2691]: I0913 00:50:33.929133 2691 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11cfa55c-0906-4684-a09a-1123f4382816" path="/var/lib/kubelet/pods/11cfa55c-0906-4684-a09a-1123f4382816/volumes" Sep 13 00:50:34.005262 kernel: kauditd_printk_skb: 2 callbacks suppressed Sep 13 00:50:34.006088 kernel: audit: type=1400 audit(1757724633.995:299): avc: denied { write } for pid=4222 comm="tee" name="fd" dev="proc" ino=25233 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:50:33.995000 audit[4222]: AVC avc: denied { write } for pid=4222 comm="tee" name="fd" dev="proc" ino=25233 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:50:33.995000 audit[4222]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd66ddf7e5 a2=241 a3=1b6 items=1 ppid=4199 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 
00:50:34.024480 kernel: audit: type=1300 audit(1757724633.995:299): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd66ddf7e5 a2=241 a3=1b6 items=1 ppid=4199 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.024565 kernel: audit: type=1307 audit(1757724633.995:299): cwd="/etc/service/enabled/cni/log" Sep 13 00:50:33.995000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 13 00:50:33.995000 audit: PATH item=0 name="/dev/fd/63" inode=25228 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:50:34.035407 kernel: audit: type=1302 audit(1757724633.995:299): item=0 name="/dev/fd/63" inode=25228 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:50:33.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:50:34.046915 kernel: audit: type=1327 audit(1757724633.995:299): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:50:34.052905 env[1756]: time="2025-09-13T00:50:34.050466793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cbb85f794-lvdgm,Uid:ec2ce1d7-66ab-4edb-a954-03dd28778881,Namespace:calico-system,Attempt:0,} returns sandbox id \"6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512\"" Sep 13 00:50:34.062712 env[1756]: time="2025-09-13T00:50:34.062637233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:50:34.071000 audit[4237]: AVC avc: denied { write } for pid=4237 comm="tee" 
name="fd" dev="proc" ino=26154 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:50:34.080917 kernel: audit: type=1400 audit(1757724634.071:300): avc: denied { write } for pid=4237 comm="tee" name="fd" dev="proc" ino=26154 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:50:34.071000 audit[4237]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe636527e3 a2=241 a3=1b6 items=1 ppid=4192 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.094901 kernel: audit: type=1300 audit(1757724634.071:300): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe636527e3 a2=241 a3=1b6 items=1 ppid=4192 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.071000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 13 00:50:34.071000 audit: PATH item=0 name="/dev/fd/63" inode=26138 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:50:34.106006 kernel: audit: type=1307 audit(1757724634.071:300): cwd="/etc/service/enabled/bird6/log" Sep 13 00:50:34.106122 kernel: audit: type=1302 audit(1757724634.071:300): item=0 name="/dev/fd/63" inode=26138 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:50:34.071000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:50:34.114896 kernel: audit: type=1327 
audit(1757724634.071:300): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:50:34.083000 audit[4245]: AVC avc: denied { write } for pid=4245 comm="tee" name="fd" dev="proc" ino=26167 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:50:34.083000 audit[4245]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff8e2367e4 a2=241 a3=1b6 items=1 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.083000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 13 00:50:34.083000 audit: PATH item=0 name="/dev/fd/63" inode=26148 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:50:34.083000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:50:34.083000 audit[4257]: AVC avc: denied { write } for pid=4257 comm="tee" name="fd" dev="proc" ino=25248 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:50:34.083000 audit[4257]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe6ffd47e3 a2=241 a3=1b6 items=1 ppid=4200 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.083000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 13 00:50:34.083000 audit: PATH item=0 name="/dev/fd/63" inode=25245 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:50:34.083000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:50:34.089000 audit[4227]: AVC avc: denied { write } for pid=4227 comm="tee" name="fd" dev="proc" ino=25250 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:50:34.089000 audit[4227]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce46197d3 a2=241 a3=1b6 items=1 ppid=4203 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.089000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 13 00:50:34.089000 audit: PATH item=0 name="/dev/fd/63" inode=25237 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:50:34.089000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:50:34.106000 audit[4252]: AVC avc: denied { write } for pid=4252 comm="tee" name="fd" dev="proc" ino=25254 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:50:34.106000 audit[4255]: AVC avc: denied { write } for pid=4255 comm="tee" name="fd" dev="proc" ino=26171 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:50:34.106000 audit[4252]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf22847e3 a2=241 a3=1b6 items=1 ppid=4195 pid=4252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.106000 audit[4255]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdfeebd7d4 a2=241 a3=1b6 items=1 ppid=4188 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.106000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 13 00:50:34.106000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 00:50:34.106000 audit: PATH item=0 name="/dev/fd/63" inode=26157 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:50:34.106000 audit: PATH item=0 name="/dev/fd/63" inode=26160 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:50:34.106000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:50:34.106000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:50:34.562000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.562000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.562000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.562000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.562000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.562000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.562000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.562000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.562000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.562000 audit: BPF prog-id=10 op=LOAD Sep 13 00:50:34.562000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc32be890 a2=98 a3=1fffffffffffffff items=0 ppid=4202 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.562000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:50:34.563000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit: BPF prog-id=11 op=LOAD Sep 13 00:50:34.564000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc32be770 a2=94 a3=3 items=0 ppid=4202 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.564000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:50:34.564000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { bpf } for pid=4307 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit: BPF prog-id=12 op=LOAD Sep 13 00:50:34.564000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc32be7b0 a2=94 a3=7fffc32be990 items=0 ppid=4202 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.564000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:50:34.564000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:50:34.564000 audit[4307]: AVC avc: denied { perfmon } for pid=4307 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.564000 audit[4307]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=3 a0=0 a1=7fffc32be880 a2=50 a3=a000000085 items=0 ppid=4202 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.564000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:50:34.570000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.570000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.570000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.570000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.570000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.570000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.570000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.570000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.570000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.570000 audit: BPF prog-id=13 op=LOAD Sep 13 00:50:34.570000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6e9495f0 a2=98 a3=3 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.570000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.572000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { perfmon } for 
pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit: BPF prog-id=14 op=LOAD Sep 13 00:50:34.579000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6e9493e0 a2=94 a3=54428f items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.579000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.579000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 
audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.579000 audit: BPF prog-id=15 op=LOAD Sep 13 00:50:34.579000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6e949410 a2=94 a3=2 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.579000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.579000 audit: BPF 
prog-id=15 op=UNLOAD Sep 13 00:50:34.757000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.757000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.757000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.757000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.757000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.757000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.757000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.757000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.757000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.757000 audit: BPF prog-id=16 
op=LOAD Sep 13 00:50:34.757000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6e9492d0 a2=94 a3=1 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.757000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.757000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:50:34.757000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.757000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff6e9493a0 a2=50 a3=7fff6e949480 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.757000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff6e9492e0 a2=28 a3=0 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 
13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff6e949310 a2=28 a3=0 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff6e949220 a2=28 a3=0 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff6e949330 a2=28 a3=0 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=4 a0=12 a1=7fff6e949310 a2=28 a3=0 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff6e949300 a2=28 a3=0 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff6e949330 a2=28 a3=0 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff6e949310 a2=28 a3=0 items=0 ppid=4202 pid=4308 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff6e949330 a2=28 a3=0 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff6e949300 a2=28 a3=0 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff6e949370 a2=28 a3=0 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff6e949120 a2=50 a3=1 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit: BPF prog-id=17 op=LOAD Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff6e949120 a2=94 a3=5 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:50:34.768000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff6e9491d0 a2=50 a3=1 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.768000 audit[4308]: AVC 
avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.768000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff6e9492f0 a2=4 a3=38 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.768000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { confidentiality } for pid=4308 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:50:34.769000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff6e949340 a2=94 a3=6 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.769000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { confidentiality } for pid=4308 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:50:34.769000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff6e948af0 a2=94 a3=88 items=0 
ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.769000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { perfmon } for pid=4308 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { bpf } for pid=4308 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.769000 audit[4308]: AVC avc: denied { confidentiality } for pid=4308 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:50:34.769000 audit[4308]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff6e948af0 a2=94 a3=88 items=0 ppid=4202 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.769000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit: BPF prog-id=18 op=LOAD Sep 13 00:50:34.782000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa1a272b0 a2=98 a3=1999999999999999 items=0 ppid=4202 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.782000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:50:34.782000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit: BPF prog-id=19 op=LOAD Sep 13 00:50:34.782000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa1a27190 a2=94 a3=ffff items=0 ppid=4202 pid=4331 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.782000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:50:34.782000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { perfmon } for pid=4331 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit[4331]: AVC avc: denied { bpf } for pid=4331 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.782000 audit: BPF prog-id=20 op=LOAD Sep 13 00:50:34.782000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa1a271d0 a2=94 a3=7fffa1a273b0 items=0 ppid=4202 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.782000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:50:34.782000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:50:34.896107 systemd-networkd[1433]: vxlan.calico: Link UP Sep 13 00:50:34.896121 systemd-networkd[1433]: vxlan.calico: Gained carrier Sep 13 00:50:34.910948 (udev-worker)[4026]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:50:34.921000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.921000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.921000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.921000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.921000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.921000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.921000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.921000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.921000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.921000 audit: BPF prog-id=21 op=LOAD Sep 13 
00:50:34.921000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd393e7030 a2=98 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.921000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.921000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for 
pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit: BPF prog-id=22 op=LOAD Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd393e6e40 a2=94 a3=54428f items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit: BPF prog-id=23 op=LOAD Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd393e6e70 a2=94 a3=2 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd393e6d40 a2=28 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd393e6d70 a2=28 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd393e6c80 a2=28 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd393e6d90 a2=28 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd393e6d70 a2=28 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for 
pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd393e6d60 a2=28 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd393e6d90 a2=28 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd393e6d70 a2=28 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd393e6d90 a2=28 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd393e6d60 a2=28 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit[4357]: AVC avc: 
denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd393e6dd0 a2=28 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.922000 audit: BPF prog-id=24 op=LOAD Sep 13 00:50:34.922000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd393e6c40 a2=94 a3=0 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.922000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.922000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffd393e6c30 a2=50 a3=2800 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.923000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffd393e6c30 a2=50 a3=2800 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.923000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.923000 audit: BPF prog-id=25 op=LOAD Sep 13 00:50:34.923000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd393e6450 a2=94 a3=2 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.923000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.924000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:50:34.924000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.924000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.924000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.924000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.924000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.924000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.924000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.924000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.924000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.924000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.924000 audit: BPF prog-id=26 op=LOAD Sep 13 00:50:34.924000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd393e6550 a2=94 a3=30 items=0 ppid=4202 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.924000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:50:34.928000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.928000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.928000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.928000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.928000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.928000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 13 00:50:34.928000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.928000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.928000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.928000 audit: BPF prog-id=27 op=LOAD Sep 13 00:50:34.928000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff52da1030 a2=98 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.928000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:34.928000 audit: BPF prog-id=27 op=UNLOAD Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC 
avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit: BPF prog-id=28 op=LOAD Sep 13 00:50:34.929000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff52da0e20 a2=94 a3=54428f items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.929000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:34.929000 audit: BPF prog-id=28 op=UNLOAD Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { bpf } for pid=4359 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:34.929000 audit: BPF prog-id=29 op=LOAD Sep 13 00:50:34.929000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 
a0=5 a1=7fff52da0e50 a2=94 a3=2 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:34.929000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:34.929000 audit: BPF prog-id=29 op=UNLOAD Sep 13 00:50:35.058000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.058000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.058000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.058000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.058000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.058000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.058000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:50:35.058000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.058000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.058000 audit: BPF prog-id=30 op=LOAD Sep 13 00:50:35.058000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff52da0d10 a2=94 a3=1 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.058000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.058000 audit: BPF prog-id=30 op=UNLOAD Sep 13 00:50:35.058000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.058000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff52da0de0 a2=50 a3=7fff52da0ec0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.058000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.069000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.069000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52da0d20 a2=28 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.069000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.069000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff52da0d50 a2=28 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.069000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.069000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff52da0c60 a2=28 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:50:35.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.069000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.069000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52da0d70 a2=28 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.069000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.069000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52da0d50 a2=28 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.069000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 
13 00:50:35.069000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52da0d40 a2=28 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.069000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.069000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52da0d70 a2=28 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.069000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.069000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff52da0d50 a2=28 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.069000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.069000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.069000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff52da0d70 a2=28 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.069000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.069000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff52da0d40 a2=28 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.069000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.069000 audit[4359]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52da0db0 a2=28 a3=0 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff52da0b60 a2=50 a3=1 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.070000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for 
pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit: BPF prog-id=31 op=LOAD Sep 13 00:50:35.070000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff52da0b60 a2=94 a3=5 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.070000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.070000 audit: BPF prog-id=31 op=UNLOAD Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff52da0c10 a2=50 a3=1 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.070000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff52da0d30 a2=4 a3=38 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.070000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { confidentiality } for pid=4359 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:50:35.070000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff52da0d80 a2=94 a3=6 items=0 
ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.070000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for 
pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { confidentiality } for pid=4359 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:50:35.070000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff52da0530 a2=94 a3=88 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.070000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.070000 audit[4359]: AVC avc: denied { confidentiality } for pid=4359 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:50:35.070000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff52da0530 a2=94 a3=88 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.070000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.071000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.071000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff52da1f60 a2=10 a3=f8f00800 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.071000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.071000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.071000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff52da1e00 a2=10 a3=3 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.071000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.071000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.071000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff52da1da0 a2=10 a3=3 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.071000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.071000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:50:35.071000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff52da1da0 a2=10 a3=7 items=0 ppid=4202 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.071000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:50:35.082000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:50:35.155000 audit[4387]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=4387 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:35.155000 audit[4387]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffddee77550 a2=0 a3=7ffddee7753c items=0 ppid=4202 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.155000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:35.166000 audit[4386]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:35.166000 audit[4386]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd267865a0 a2=0 a3=7ffd2678658c items=0 ppid=4202 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.166000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:35.168000 audit[4385]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=4385 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:35.168000 audit[4385]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc6c3c6ab0 a2=0 a3=7ffc6c3c6a9c items=0 ppid=4202 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.168000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:35.185000 audit[4392]: NETFILTER_CFG table=filter:104 family=2 entries=94 op=nft_register_chain pid=4392 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:35.185000 audit[4392]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffc0ba19ae0 a2=0 
a3=7ffc0ba19acc items=0 ppid=4202 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:35.185000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:35.653460 env[1756]: time="2025-09-13T00:50:35.653411588Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:35.657518 env[1756]: time="2025-09-13T00:50:35.657460569Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:35.660345 env[1756]: time="2025-09-13T00:50:35.660298484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:35.663443 env[1756]: time="2025-09-13T00:50:35.663361649Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:35.664241 env[1756]: time="2025-09-13T00:50:35.664205107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:50:35.667312 env[1756]: time="2025-09-13T00:50:35.666683951Z" level=info msg="CreateContainer within sandbox \"6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512\" for container 
&ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:50:35.671586 systemd-networkd[1433]: cali2d162fef528: Gained IPv6LL Sep 13 00:50:35.695772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732153488.mount: Deactivated successfully. Sep 13 00:50:35.708437 env[1756]: time="2025-09-13T00:50:35.708363776Z" level=info msg="CreateContainer within sandbox \"6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"da63ee4b588c997012ef6a41c1daf64de867e01c06b79060e7a5d0ccd246b31c\"" Sep 13 00:50:35.710196 env[1756]: time="2025-09-13T00:50:35.709268185Z" level=info msg="StartContainer for \"da63ee4b588c997012ef6a41c1daf64de867e01c06b79060e7a5d0ccd246b31c\"" Sep 13 00:50:35.800986 env[1756]: time="2025-09-13T00:50:35.800857100Z" level=info msg="StartContainer for \"da63ee4b588c997012ef6a41c1daf64de867e01c06b79060e7a5d0ccd246b31c\" returns successfully" Sep 13 00:50:35.803578 env[1756]: time="2025-09-13T00:50:35.803059682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:50:35.989126 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL Sep 13 00:50:38.329781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144324897.mount: Deactivated successfully. 
Sep 13 00:50:38.347282 env[1756]: time="2025-09-13T00:50:38.347224911Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:38.349253 env[1756]: time="2025-09-13T00:50:38.349210164Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:38.351284 env[1756]: time="2025-09-13T00:50:38.351248669Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:38.352688 env[1756]: time="2025-09-13T00:50:38.352654022Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:38.353306 env[1756]: time="2025-09-13T00:50:38.353272093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:50:38.356129 env[1756]: time="2025-09-13T00:50:38.356096003Z" level=info msg="CreateContainer within sandbox \"6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:50:38.375766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834581422.mount: Deactivated successfully. 
Sep 13 00:50:38.382849 env[1756]: time="2025-09-13T00:50:38.382770325Z" level=info msg="CreateContainer within sandbox \"6784535af26d5fb308bb7f5c994f7e1341dc281e65c1d12e01aab5cbc3365512\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"49f965b7a4716bb3bb32cbfcc9b66c829304e7cf05f2a7f67a3cd66efb881f32\"" Sep 13 00:50:38.383650 env[1756]: time="2025-09-13T00:50:38.383621256Z" level=info msg="StartContainer for \"49f965b7a4716bb3bb32cbfcc9b66c829304e7cf05f2a7f67a3cd66efb881f32\"" Sep 13 00:50:38.493429 env[1756]: time="2025-09-13T00:50:38.493374571Z" level=info msg="StartContainer for \"49f965b7a4716bb3bb32cbfcc9b66c829304e7cf05f2a7f67a3cd66efb881f32\" returns successfully" Sep 13 00:50:38.903551 env[1756]: time="2025-09-13T00:50:38.903496380Z" level=info msg="StopPodSandbox for \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\"" Sep 13 00:50:38.903813 env[1756]: time="2025-09-13T00:50:38.903782652Z" level=info msg="StopPodSandbox for \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\"" Sep 13 00:50:38.904199 env[1756]: time="2025-09-13T00:50:38.903750704Z" level=info msg="StopPodSandbox for \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\"" Sep 13 00:50:38.904579 env[1756]: time="2025-09-13T00:50:38.904558974Z" level=info msg="StopPodSandbox for \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\"" Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.055 [INFO][4515] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.055 [INFO][4515] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" iface="eth0" netns="/var/run/netns/cni-91adf8a0-5abf-1799-7cbb-3259a5a18b60" Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.055 [INFO][4515] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" iface="eth0" netns="/var/run/netns/cni-91adf8a0-5abf-1799-7cbb-3259a5a18b60" Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.055 [INFO][4515] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" iface="eth0" netns="/var/run/netns/cni-91adf8a0-5abf-1799-7cbb-3259a5a18b60" Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.055 [INFO][4515] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.055 [INFO][4515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.215 [INFO][4536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" HandleID="k8s-pod-network.b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.215 [INFO][4536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.215 [INFO][4536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.228 [WARNING][4536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" HandleID="k8s-pod-network.b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.228 [INFO][4536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" HandleID="k8s-pod-network.b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.230 [INFO][4536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:39.237866 env[1756]: 2025-09-13 00:50:39.233 [INFO][4515] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:39.242633 systemd[1]: run-netns-cni\x2d91adf8a0\x2d5abf\x2d1799\x2d7cbb\x2d3259a5a18b60.mount: Deactivated successfully. 
Sep 13 00:50:39.243292 env[1756]: time="2025-09-13T00:50:39.243234563Z" level=info msg="TearDown network for sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\" successfully" Sep 13 00:50:39.243436 env[1756]: time="2025-09-13T00:50:39.243415478Z" level=info msg="StopPodSandbox for \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\" returns successfully" Sep 13 00:50:39.244341 env[1756]: time="2025-09-13T00:50:39.244307096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbdlb,Uid:543e9814-38b9-4890-8c16-f362d4a3151e,Namespace:calico-system,Attempt:1,}" Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.113 [INFO][4516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.113 [INFO][4516] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" iface="eth0" netns="/var/run/netns/cni-36bb3eed-0113-5889-046b-755adb4606e8" Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.113 [INFO][4516] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" iface="eth0" netns="/var/run/netns/cni-36bb3eed-0113-5889-046b-755adb4606e8" Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.115 [INFO][4516] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" iface="eth0" netns="/var/run/netns/cni-36bb3eed-0113-5889-046b-755adb4606e8" Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.115 [INFO][4516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.115 [INFO][4516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.230 [INFO][4549] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" HandleID="k8s-pod-network.ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.230 [INFO][4549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.230 [INFO][4549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.250 [WARNING][4549] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" HandleID="k8s-pod-network.ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.251 [INFO][4549] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" HandleID="k8s-pod-network.ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.290 [INFO][4549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:39.323306 env[1756]: 2025-09-13 00:50:39.300 [INFO][4516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:39.323306 env[1756]: time="2025-09-13T00:50:39.313205638Z" level=info msg="TearDown network for sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\" successfully" Sep 13 00:50:39.323306 env[1756]: time="2025-09-13T00:50:39.313266614Z" level=info msg="StopPodSandbox for \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\" returns successfully" Sep 13 00:50:39.323306 env[1756]: time="2025-09-13T00:50:39.314043202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xnfhz,Uid:7dc04780-57ee-4fd0-a262-21a1cfd3d394,Namespace:calico-system,Attempt:1,}" Sep 13 00:50:39.319636 systemd[1]: run-netns-cni\x2d36bb3eed\x2d0113\x2d5889\x2d046b\x2d755adb4606e8.mount: Deactivated successfully. 
Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.099 [INFO][4517] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.099 [INFO][4517] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" iface="eth0" netns="/var/run/netns/cni-a579400a-5751-55ec-5deb-989d2b03991b" Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.099 [INFO][4517] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" iface="eth0" netns="/var/run/netns/cni-a579400a-5751-55ec-5deb-989d2b03991b" Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.115 [INFO][4517] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" iface="eth0" netns="/var/run/netns/cni-a579400a-5751-55ec-5deb-989d2b03991b" Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.115 [INFO][4517] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.115 [INFO][4517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.273 [INFO][4548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" HandleID="k8s-pod-network.d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.273 [INFO][4548] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.290 [INFO][4548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.298 [WARNING][4548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" HandleID="k8s-pod-network.d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.298 [INFO][4548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" HandleID="k8s-pod-network.d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.300 [INFO][4548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:39.339310 env[1756]: 2025-09-13 00:50:39.309 [INFO][4517] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:50:39.342155 env[1756]: time="2025-09-13T00:50:39.339568209Z" level=info msg="TearDown network for sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\" successfully" Sep 13 00:50:39.342155 env[1756]: time="2025-09-13T00:50:39.339617394Z" level=info msg="StopPodSandbox for \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\" returns successfully" Sep 13 00:50:39.343279 kubelet[2691]: I0913 00:50:39.335370 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6cbb85f794-lvdgm" podStartSLOduration=2.032664377 podStartE2EDuration="6.32872599s" podCreationTimestamp="2025-09-13 00:50:33 +0000 UTC" firstStartedPulling="2025-09-13 00:50:34.058455468 +0000 UTC m=+42.366084413" lastFinishedPulling="2025-09-13 00:50:38.354517083 +0000 UTC m=+46.662146026" observedRunningTime="2025-09-13 00:50:39.325525231 +0000 UTC m=+47.633154182" watchObservedRunningTime="2025-09-13 00:50:39.32872599 +0000 UTC m=+47.636354944" Sep 13 00:50:39.347502 env[1756]: time="2025-09-13T00:50:39.347453643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5775d679df-x29l7,Uid:bab504dd-aec7-4945-b513-319b96cc26d8,Namespace:calico-system,Attempt:1,}" Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.168 [INFO][4524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.168 [INFO][4524] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" iface="eth0" netns="/var/run/netns/cni-77e589a4-7e32-83b8-29fb-0135797cc1a0" Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.169 [INFO][4524] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" iface="eth0" netns="/var/run/netns/cni-77e589a4-7e32-83b8-29fb-0135797cc1a0" Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.169 [INFO][4524] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" iface="eth0" netns="/var/run/netns/cni-77e589a4-7e32-83b8-29fb-0135797cc1a0" Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.169 [INFO][4524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.169 [INFO][4524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.296 [INFO][4561] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" HandleID="k8s-pod-network.bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.297 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.300 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.326 [WARNING][4561] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" HandleID="k8s-pod-network.bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.326 [INFO][4561] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" HandleID="k8s-pod-network.bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.339 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:39.364362 env[1756]: 2025-09-13 00:50:39.361 [INFO][4524] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:39.368292 env[1756]: time="2025-09-13T00:50:39.364505048Z" level=info msg="TearDown network for sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\" successfully" Sep 13 00:50:39.368292 env[1756]: time="2025-09-13T00:50:39.364575979Z" level=info msg="StopPodSandbox for \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\" returns successfully" Sep 13 00:50:39.368292 env[1756]: time="2025-09-13T00:50:39.367353542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcbb86cdd-p4282,Uid:71d9264d-8131-4edf-9956-3d6532ed3b91,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:50:39.470000 audit[4615]: NETFILTER_CFG table=filter:105 family=2 entries=19 op=nft_register_rule pid=4615 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:39.472722 kernel: kauditd_printk_skb: 547 callbacks suppressed Sep 13 00:50:39.472823 kernel: audit: type=1325 audit(1757724639.470:408): table=filter:105 family=2 entries=19 op=nft_register_rule 
pid=4615 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:39.470000 audit[4615]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd9a553090 a2=0 a3=7ffd9a55307c items=0 ppid=2797 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:39.482536 kernel: audit: type=1300 audit(1757724639.470:408): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd9a553090 a2=0 a3=7ffd9a55307c items=0 ppid=2797 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:39.470000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:39.490468 kernel: audit: type=1327 audit(1757724639.470:408): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:39.490606 kernel: audit: type=1325 audit(1757724639.482:409): table=nat:106 family=2 entries=21 op=nft_register_chain pid=4615 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:39.482000 audit[4615]: NETFILTER_CFG table=nat:106 family=2 entries=21 op=nft_register_chain pid=4615 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:39.482000 audit[4615]: SYSCALL arch=c000003e syscall=46 success=yes exit=7044 a0=3 a1=7ffd9a553090 a2=0 a3=7ffd9a55307c items=0 ppid=2797 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:39.500962 kernel: audit: type=1300 audit(1757724639.482:409): arch=c000003e syscall=46 
success=yes exit=7044 a0=3 a1=7ffd9a553090 a2=0 a3=7ffd9a55307c items=0 ppid=2797 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:39.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:39.507901 kernel: audit: type=1327 audit(1757724639.482:409): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:39.542385 systemd[1]: run-netns-cni\x2d77e589a4\x2d7e32\x2d83b8\x2d29fb\x2d0135797cc1a0.mount: Deactivated successfully. Sep 13 00:50:39.542600 systemd[1]: run-netns-cni\x2da579400a\x2d5751\x2d55ec\x2d5deb\x2d989d2b03991b.mount: Deactivated successfully. Sep 13 00:50:39.758974 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:50:39.759129 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibd2bb636d7a: link becomes ready Sep 13 00:50:39.766843 systemd-networkd[1433]: calibd2bb636d7a: Link UP Sep 13 00:50:39.767105 systemd-networkd[1433]: calibd2bb636d7a: Gained carrier Sep 13 00:50:39.771629 (udev-worker)[4652]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.456 [INFO][4570] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0 csi-node-driver- calico-system 543e9814-38b9-4890-8c16-f362d4a3151e 917 0 2025-09-13 00:50:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-243 csi-node-driver-dbdlb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibd2bb636d7a [] [] }} ContainerID="83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" Namespace="calico-system" Pod="csi-node-driver-dbdlb" WorkloadEndpoint="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.457 [INFO][4570] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" Namespace="calico-system" Pod="csi-node-driver-dbdlb" WorkloadEndpoint="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.664 [INFO][4620] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" HandleID="k8s-pod-network.83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.675 [INFO][4620] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" HandleID="k8s-pod-network.83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" 
Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000328f90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-243", "pod":"csi-node-driver-dbdlb", "timestamp":"2025-09-13 00:50:39.664713358 +0000 UTC"}, Hostname:"ip-172-31-30-243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.676 [INFO][4620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.676 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.676 [INFO][4620] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-243' Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.688 [INFO][4620] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" host="ip-172-31-30-243" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.708 [INFO][4620] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-243" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.717 [INFO][4620] ipam/ipam.go 511: Trying affinity for 192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.722 [INFO][4620] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.726 [INFO][4620] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.726 [INFO][4620] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.124.192/26 handle="k8s-pod-network.83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" host="ip-172-31-30-243" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.729 [INFO][4620] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5 Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.736 [INFO][4620] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" host="ip-172-31-30-243" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.743 [INFO][4620] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.124.194/26] block=192.168.124.192/26 handle="k8s-pod-network.83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" host="ip-172-31-30-243" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.743 [INFO][4620] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.194/26] handle="k8s-pod-network.83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" host="ip-172-31-30-243" Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.744 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:50:39.828949 env[1756]: 2025-09-13 00:50:39.745 [INFO][4620] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.194/26] IPv6=[] ContainerID="83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" HandleID="k8s-pod-network.83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:39.829991 env[1756]: 2025-09-13 00:50:39.747 [INFO][4570] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" Namespace="calico-system" Pod="csi-node-driver-dbdlb" WorkloadEndpoint="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"543e9814-38b9-4890-8c16-f362d4a3151e", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"", Pod:"csi-node-driver-dbdlb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd2bb636d7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:39.829991 env[1756]: 2025-09-13 00:50:39.747 [INFO][4570] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.194/32] ContainerID="83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" Namespace="calico-system" Pod="csi-node-driver-dbdlb" WorkloadEndpoint="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:39.829991 env[1756]: 2025-09-13 00:50:39.747 [INFO][4570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd2bb636d7a ContainerID="83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" Namespace="calico-system" Pod="csi-node-driver-dbdlb" WorkloadEndpoint="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:39.829991 env[1756]: 2025-09-13 00:50:39.760 [INFO][4570] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" Namespace="calico-system" Pod="csi-node-driver-dbdlb" WorkloadEndpoint="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:39.829991 env[1756]: 2025-09-13 00:50:39.769 [INFO][4570] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" Namespace="calico-system" Pod="csi-node-driver-dbdlb" WorkloadEndpoint="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"543e9814-38b9-4890-8c16-f362d4a3151e", ResourceVersion:"917", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5", Pod:"csi-node-driver-dbdlb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd2bb636d7a", MAC:"76:3a:ab:4d:1e:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:39.829991 env[1756]: 2025-09-13 00:50:39.826 [INFO][4570] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5" Namespace="calico-system" Pod="csi-node-driver-dbdlb" WorkloadEndpoint="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:39.875117 env[1756]: time="2025-09-13T00:50:39.875036507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:39.875376 env[1756]: time="2025-09-13T00:50:39.875329891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:39.875515 env[1756]: time="2025-09-13T00:50:39.875488297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:39.892416 env[1756]: time="2025-09-13T00:50:39.892340837Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5 pid=4669 runtime=io.containerd.runc.v2 Sep 13 00:50:39.935074 env[1756]: time="2025-09-13T00:50:39.934408667Z" level=info msg="StopPodSandbox for \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\"" Sep 13 00:50:40.006203 systemd[1]: run-containerd-runc-k8s.io-83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5-runc.5weLzW.mount: Deactivated successfully. Sep 13 00:50:40.016910 kernel: audit: type=1325 audit(1757724640.009:410): table=filter:107 family=2 entries=36 op=nft_register_chain pid=4702 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:40.009000 audit[4702]: NETFILTER_CFG table=filter:107 family=2 entries=36 op=nft_register_chain pid=4702 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:40.031575 kernel: audit: type=1300 audit(1757724640.009:410): arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7fff7e0aab90 a2=0 a3=7fff7e0aab7c items=0 ppid=4202 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:40.009000 audit[4702]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7fff7e0aab90 a2=0 a3=7fff7e0aab7c items=0 ppid=4202 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:40.009000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:40.043901 kernel: audit: type=1327 audit(1757724640.009:410): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:40.049032 systemd-networkd[1433]: cali832bff5e98b: Link UP Sep 13 00:50:40.053918 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali832bff5e98b: link becomes ready Sep 13 00:50:40.054146 systemd-networkd[1433]: cali832bff5e98b: Gained carrier Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.550 [INFO][4577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0 goldmane-7988f88666- calico-system 7dc04780-57ee-4fd0-a262-21a1cfd3d394 919 0 2025-09-13 00:50:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-243 goldmane-7988f88666-xnfhz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali832bff5e98b [] [] }} ContainerID="cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" Namespace="calico-system" Pod="goldmane-7988f88666-xnfhz" WorkloadEndpoint="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-" Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.550 [INFO][4577] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" Namespace="calico-system" Pod="goldmane-7988f88666-xnfhz" WorkloadEndpoint="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:40.088748 
env[1756]: 2025-09-13 00:50:39.687 [INFO][4629] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" HandleID="k8s-pod-network.cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.687 [INFO][4629] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" HandleID="k8s-pod-network.cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd790), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-243", "pod":"goldmane-7988f88666-xnfhz", "timestamp":"2025-09-13 00:50:39.676999335 +0000 UTC"}, Hostname:"ip-172-31-30-243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.688 [INFO][4629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.743 [INFO][4629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.743 [INFO][4629] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-243' Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.815 [INFO][4629] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" host="ip-172-31-30-243" Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.854 [INFO][4629] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-243" Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.891 [INFO][4629] ipam/ipam.go 511: Trying affinity for 192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.900 [INFO][4629] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.913 [INFO][4629] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.913 [INFO][4629] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" host="ip-172-31-30-243" Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.950 [INFO][4629] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.969 [INFO][4629] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" host="ip-172-31-30-243" Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.987 [INFO][4629] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.124.195/26] block=192.168.124.192/26 
handle="k8s-pod-network.cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" host="ip-172-31-30-243" Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.987 [INFO][4629] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.195/26] handle="k8s-pod-network.cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" host="ip-172-31-30-243" Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.988 [INFO][4629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:40.088748 env[1756]: 2025-09-13 00:50:39.988 [INFO][4629] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.195/26] IPv6=[] ContainerID="cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" HandleID="k8s-pod-network.cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:40.096174 env[1756]: 2025-09-13 00:50:40.011 [INFO][4577] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" Namespace="calico-system" Pod="goldmane-7988f88666-xnfhz" WorkloadEndpoint="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7dc04780-57ee-4fd0-a262-21a1cfd3d394", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"", Pod:"goldmane-7988f88666-xnfhz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali832bff5e98b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:40.096174 env[1756]: 2025-09-13 00:50:40.011 [INFO][4577] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.195/32] ContainerID="cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" Namespace="calico-system" Pod="goldmane-7988f88666-xnfhz" WorkloadEndpoint="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:40.096174 env[1756]: 2025-09-13 00:50:40.011 [INFO][4577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali832bff5e98b ContainerID="cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" Namespace="calico-system" Pod="goldmane-7988f88666-xnfhz" WorkloadEndpoint="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:40.096174 env[1756]: 2025-09-13 00:50:40.055 [INFO][4577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" Namespace="calico-system" Pod="goldmane-7988f88666-xnfhz" WorkloadEndpoint="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:40.096174 env[1756]: 2025-09-13 00:50:40.066 [INFO][4577] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" Namespace="calico-system" 
Pod="goldmane-7988f88666-xnfhz" WorkloadEndpoint="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7dc04780-57ee-4fd0-a262-21a1cfd3d394", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e", Pod:"goldmane-7988f88666-xnfhz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali832bff5e98b", MAC:"32:9d:9d:29:8c:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:40.096174 env[1756]: 2025-09-13 00:50:40.083 [INFO][4577] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e" Namespace="calico-system" Pod="goldmane-7988f88666-xnfhz" WorkloadEndpoint="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:40.127000 audit[4724]: 
NETFILTER_CFG table=filter:108 family=2 entries=48 op=nft_register_chain pid=4724 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:40.135007 kernel: audit: type=1325 audit(1757724640.127:411): table=filter:108 family=2 entries=48 op=nft_register_chain pid=4724 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:40.127000 audit[4724]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7ffc524ceb90 a2=0 a3=7ffc524ceb7c items=0 ppid=4202 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:40.127000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:40.162122 systemd-networkd[1433]: cali08f62cf9d8d: Link UP Sep 13 00:50:40.173241 systemd-networkd[1433]: cali08f62cf9d8d: Gained carrier Sep 13 00:50:40.173947 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali08f62cf9d8d: link becomes ready Sep 13 00:50:40.200787 env[1756]: time="2025-09-13T00:50:40.200732505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbdlb,Uid:543e9814-38b9-4890-8c16-f362d4a3151e,Namespace:calico-system,Attempt:1,} returns sandbox id \"83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5\"" Sep 13 00:50:40.208254 env[1756]: time="2025-09-13T00:50:40.208209436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:39.611 [INFO][4591] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0 calico-kube-controllers-5775d679df- calico-system bab504dd-aec7-4945-b513-319b96cc26d8 918 0 2025-09-13 00:50:14 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5775d679df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-243 calico-kube-controllers-5775d679df-x29l7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali08f62cf9d8d [] [] }} ContainerID="51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" Namespace="calico-system" Pod="calico-kube-controllers-5775d679df-x29l7" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:39.612 [INFO][4591] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" Namespace="calico-system" Pod="calico-kube-controllers-5775d679df-x29l7" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:39.699 [INFO][4636] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" HandleID="k8s-pod-network.51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:39.702 [INFO][4636] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" HandleID="k8s-pod-network.51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-243", 
"pod":"calico-kube-controllers-5775d679df-x29l7", "timestamp":"2025-09-13 00:50:39.699489032 +0000 UTC"}, Hostname:"ip-172-31-30-243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:39.702 [INFO][4636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:39.987 [INFO][4636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:39.988 [INFO][4636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-243' Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.053 [INFO][4636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" host="ip-172-31-30-243" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.075 [INFO][4636] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-243" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.095 [INFO][4636] ipam/ipam.go 511: Trying affinity for 192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.115 [INFO][4636] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.119 [INFO][4636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.119 [INFO][4636] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" host="ip-172-31-30-243" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.122 [INFO][4636] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9 Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.139 [INFO][4636] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" host="ip-172-31-30-243" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.148 [INFO][4636] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.124.196/26] block=192.168.124.192/26 handle="k8s-pod-network.51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" host="ip-172-31-30-243" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.148 [INFO][4636] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.196/26] handle="k8s-pod-network.51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" host="ip-172-31-30-243" Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.150 [INFO][4636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:50:40.209828 env[1756]: 2025-09-13 00:50:40.150 [INFO][4636] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.196/26] IPv6=[] ContainerID="51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" HandleID="k8s-pod-network.51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:50:40.210892 env[1756]: 2025-09-13 00:50:40.153 [INFO][4591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" Namespace="calico-system" Pod="calico-kube-controllers-5775d679df-x29l7" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0", GenerateName:"calico-kube-controllers-5775d679df-", Namespace:"calico-system", SelfLink:"", UID:"bab504dd-aec7-4945-b513-319b96cc26d8", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5775d679df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"", Pod:"calico-kube-controllers-5775d679df-x29l7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08f62cf9d8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:40.210892 env[1756]: 2025-09-13 00:50:40.154 [INFO][4591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.196/32] ContainerID="51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" Namespace="calico-system" Pod="calico-kube-controllers-5775d679df-x29l7" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:50:40.210892 env[1756]: 2025-09-13 00:50:40.154 [INFO][4591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08f62cf9d8d ContainerID="51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" Namespace="calico-system" Pod="calico-kube-controllers-5775d679df-x29l7" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:50:40.210892 env[1756]: 2025-09-13 00:50:40.174 [INFO][4591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" Namespace="calico-system" Pod="calico-kube-controllers-5775d679df-x29l7" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:50:40.210892 env[1756]: 2025-09-13 00:50:40.183 [INFO][4591] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" Namespace="calico-system" Pod="calico-kube-controllers-5775d679df-x29l7" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0", GenerateName:"calico-kube-controllers-5775d679df-", Namespace:"calico-system", SelfLink:"", UID:"bab504dd-aec7-4945-b513-319b96cc26d8", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5775d679df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9", Pod:"calico-kube-controllers-5775d679df-x29l7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08f62cf9d8d", MAC:"92:53:43:d1:48:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:40.210892 env[1756]: 2025-09-13 00:50:40.198 [INFO][4591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9" Namespace="calico-system" Pod="calico-kube-controllers-5775d679df-x29l7" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:50:40.253939 env[1756]: time="2025-09-13T00:50:40.253844937Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:40.254241 env[1756]: time="2025-09-13T00:50:40.254209664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:40.254394 env[1756]: time="2025-09-13T00:50:40.254369206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:40.254764 env[1756]: time="2025-09-13T00:50:40.254714846Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e pid=4749 runtime=io.containerd.runc.v2 Sep 13 00:50:40.286750 systemd-networkd[1433]: cali0b38c8fd994: Link UP Sep 13 00:50:40.303470 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0b38c8fd994: link becomes ready Sep 13 00:50:40.295000 audit[4768]: NETFILTER_CFG table=filter:109 family=2 entries=44 op=nft_register_chain pid=4768 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:40.295000 audit[4768]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffd281540c0 a2=0 a3=7ffd281540ac items=0 ppid=4202 pid=4768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:40.295000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:40.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.30.243:22-147.75.109.163:43518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:50:40.298278 systemd[1]: Started sshd@7-172.31.30.243:22-147.75.109.163:43518.service. Sep 13 00:50:40.304051 systemd-networkd[1433]: cali0b38c8fd994: Gained carrier Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:39.570 [INFO][4605] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0 calico-apiserver-5dcbb86cdd- calico-apiserver 71d9264d-8131-4edf-9956-3d6532ed3b91 920 0 2025-09-13 00:50:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dcbb86cdd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-243 calico-apiserver-5dcbb86cdd-p4282 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0b38c8fd994 [] [] }} ContainerID="c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-p4282" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-" Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:39.570 [INFO][4605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-p4282" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:39.709 [INFO][4637] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" HandleID="k8s-pod-network.c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:40.331583 env[1756]: 
2025-09-13 00:50:39.709 [INFO][4637] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" HandleID="k8s-pod-network.c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-243", "pod":"calico-apiserver-5dcbb86cdd-p4282", "timestamp":"2025-09-13 00:50:39.709384591 +0000 UTC"}, Hostname:"ip-172-31-30-243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:39.709 [INFO][4637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.150 [INFO][4637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.150 [INFO][4637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-243' Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.162 [INFO][4637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" host="ip-172-31-30-243" Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.190 [INFO][4637] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-243" Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.213 [INFO][4637] ipam/ipam.go 511: Trying affinity for 192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.217 [INFO][4637] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.222 [INFO][4637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.222 [INFO][4637] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" host="ip-172-31-30-243" Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.226 [INFO][4637] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.249 [INFO][4637] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" host="ip-172-31-30-243" Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.262 [INFO][4637] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.124.197/26] block=192.168.124.192/26 
handle="k8s-pod-network.c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" host="ip-172-31-30-243" Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.262 [INFO][4637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.197/26] handle="k8s-pod-network.c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" host="ip-172-31-30-243" Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.262 [INFO][4637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:40.331583 env[1756]: 2025-09-13 00:50:40.262 [INFO][4637] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.197/26] IPv6=[] ContainerID="c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" HandleID="k8s-pod-network.c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:40.332746 env[1756]: 2025-09-13 00:50:40.265 [INFO][4605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-p4282" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0", GenerateName:"calico-apiserver-5dcbb86cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"71d9264d-8131-4edf-9956-3d6532ed3b91", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcbb86cdd", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"", Pod:"calico-apiserver-5dcbb86cdd-p4282", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b38c8fd994", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:40.332746 env[1756]: 2025-09-13 00:50:40.266 [INFO][4605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.197/32] ContainerID="c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-p4282" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:40.332746 env[1756]: 2025-09-13 00:50:40.266 [INFO][4605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b38c8fd994 ContainerID="c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-p4282" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:40.332746 env[1756]: 2025-09-13 00:50:40.304 [INFO][4605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-p4282" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:40.332746 env[1756]: 2025-09-13 00:50:40.304 [INFO][4605] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-p4282" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0", GenerateName:"calico-apiserver-5dcbb86cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"71d9264d-8131-4edf-9956-3d6532ed3b91", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcbb86cdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed", Pod:"calico-apiserver-5dcbb86cdd-p4282", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b38c8fd994", MAC:"c2:f2:0c:4b:45:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:40.332746 env[1756]: 2025-09-13 00:50:40.323 [INFO][4605] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-p4282" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:40.403068 env[1756]: time="2025-09-13T00:50:40.402638996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:40.403068 env[1756]: time="2025-09-13T00:50:40.402716575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:40.403068 env[1756]: time="2025-09-13T00:50:40.402735669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:40.403717 env[1756]: time="2025-09-13T00:50:40.403298394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9 pid=4791 runtime=io.containerd.runc.v2 Sep 13 00:50:40.425000 audit[4799]: NETFILTER_CFG table=filter:110 family=2 entries=68 op=nft_register_chain pid=4799 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:40.425000 audit[4799]: SYSCALL arch=c000003e syscall=46 success=yes exit=34624 a0=3 a1=7fff70b7c740 a2=0 a3=7fff70b7c72c items=0 ppid=4202 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:40.425000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:40.477493 env[1756]: time="2025-09-13T00:50:40.474069954Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:40.477493 env[1756]: time="2025-09-13T00:50:40.474189297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:40.477493 env[1756]: time="2025-09-13T00:50:40.474223061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:40.477493 env[1756]: time="2025-09-13T00:50:40.474453282Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed pid=4825 runtime=io.containerd.runc.v2 Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.406 [INFO][4706] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.406 [INFO][4706] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" iface="eth0" netns="/var/run/netns/cni-a2e45a9b-b65a-017d-d2b6-4be0da0c4d2c" Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.406 [INFO][4706] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" iface="eth0" netns="/var/run/netns/cni-a2e45a9b-b65a-017d-d2b6-4be0da0c4d2c" Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.406 [INFO][4706] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" iface="eth0" netns="/var/run/netns/cni-a2e45a9b-b65a-017d-d2b6-4be0da0c4d2c" Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.406 [INFO][4706] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.406 [INFO][4706] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.505 [INFO][4804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" HandleID="k8s-pod-network.563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.506 [INFO][4804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.506 [INFO][4804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.534 [WARNING][4804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" HandleID="k8s-pod-network.563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.534 [INFO][4804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" HandleID="k8s-pod-network.563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.537 [INFO][4804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:40.551995 env[1756]: 2025-09-13 00:50:40.540 [INFO][4706] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:40.560083 systemd[1]: run-netns-cni\x2da2e45a9b\x2db65a\x2d017d\x2dd2b6\x2d4be0da0c4d2c.mount: Deactivated successfully. 
Sep 13 00:50:40.565576 env[1756]: time="2025-09-13T00:50:40.565533879Z" level=info msg="TearDown network for sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\" successfully" Sep 13 00:50:40.565716 env[1756]: time="2025-09-13T00:50:40.565698197Z" level=info msg="StopPodSandbox for \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\" returns successfully" Sep 13 00:50:40.566961 env[1756]: time="2025-09-13T00:50:40.566924818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2frpj,Uid:fda33809-2b03-4521-bce2-be3153adfcec,Namespace:kube-system,Attempt:1,}" Sep 13 00:50:40.591587 env[1756]: time="2025-09-13T00:50:40.591543017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xnfhz,Uid:7dc04780-57ee-4fd0-a262-21a1cfd3d394,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e\"" Sep 13 00:50:40.633000 audit[4770]: USER_ACCT pid=4770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:40.635364 sshd[4770]: Accepted publickey for core from 147.75.109.163 port 43518 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:50:40.638000 audit[4770]: CRED_ACQ pid=4770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:40.638000 audit[4770]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1dc95d40 a2=3 a3=0 items=0 ppid=1 pid=4770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:50:40.638000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:50:40.643409 sshd[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:50:40.656503 env[1756]: time="2025-09-13T00:50:40.656465099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5775d679df-x29l7,Uid:bab504dd-aec7-4945-b513-319b96cc26d8,Namespace:calico-system,Attempt:1,} returns sandbox id \"51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9\"" Sep 13 00:50:40.662093 systemd[1]: Started session-8.scope. Sep 13 00:50:40.662904 systemd-logind[1741]: New session 8 of user core. Sep 13 00:50:40.680000 audit[4770]: USER_START pid=4770 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:40.683000 audit[4901]: CRED_ACQ pid=4901 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:40.712037 env[1756]: time="2025-09-13T00:50:40.711986106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcbb86cdd-p4282,Uid:71d9264d-8131-4edf-9956-3d6532ed3b91,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed\"" Sep 13 00:50:40.840123 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:50:40.840258 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibebe9621c50: link becomes ready Sep 13 00:50:40.839047 systemd-networkd[1433]: calibebe9621c50: Link UP Sep 13 00:50:40.842311 systemd-networkd[1433]: calibebe9621c50: Gained carrier Sep 13 00:50:40.869343 env[1756]: 
2025-09-13 00:50:40.731 [INFO][4890] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0 coredns-7c65d6cfc9- kube-system fda33809-2b03-4521-bce2-be3153adfcec 976 0 2025-09-13 00:49:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-243 coredns-7c65d6cfc9-2frpj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibebe9621c50 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2frpj" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.731 [INFO][4890] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2frpj" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.766 [INFO][4910] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" HandleID="k8s-pod-network.4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.767 [INFO][4910] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" HandleID="k8s-pod-network.4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-243", "pod":"coredns-7c65d6cfc9-2frpj", "timestamp":"2025-09-13 00:50:40.766543245 +0000 UTC"}, Hostname:"ip-172-31-30-243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.767 [INFO][4910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.767 [INFO][4910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.767 [INFO][4910] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-243' Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.775 [INFO][4910] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" host="ip-172-31-30-243" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.784 [INFO][4910] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-243" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.798 [INFO][4910] ipam/ipam.go 511: Trying affinity for 192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.802 [INFO][4910] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.807 [INFO][4910] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.807 [INFO][4910] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" 
host="ip-172-31-30-243" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.810 [INFO][4910] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883 Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.817 [INFO][4910] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" host="ip-172-31-30-243" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.829 [INFO][4910] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.124.198/26] block=192.168.124.192/26 handle="k8s-pod-network.4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" host="ip-172-31-30-243" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.830 [INFO][4910] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.198/26] handle="k8s-pod-network.4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" host="ip-172-31-30-243" Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.830 [INFO][4910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:50:40.869343 env[1756]: 2025-09-13 00:50:40.830 [INFO][4910] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.198/26] IPv6=[] ContainerID="4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" HandleID="k8s-pod-network.4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:40.877450 env[1756]: 2025-09-13 00:50:40.833 [INFO][4890] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2frpj" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fda33809-2b03-4521-bce2-be3153adfcec", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 49, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"", Pod:"coredns-7c65d6cfc9-2frpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibebe9621c50", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:40.877450 env[1756]: 2025-09-13 00:50:40.833 [INFO][4890] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.198/32] ContainerID="4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2frpj" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:40.877450 env[1756]: 2025-09-13 00:50:40.833 [INFO][4890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibebe9621c50 ContainerID="4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2frpj" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:40.877450 env[1756]: 2025-09-13 00:50:40.843 [INFO][4890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2frpj" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:40.877450 env[1756]: 2025-09-13 00:50:40.845 [INFO][4890] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2frpj" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fda33809-2b03-4521-bce2-be3153adfcec", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 49, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883", Pod:"coredns-7c65d6cfc9-2frpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibebe9621c50", MAC:"6a:d5:dc:f8:5c:96", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:40.877450 env[1756]: 2025-09-13 00:50:40.866 [INFO][4890] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2frpj" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:40.899187 env[1756]: time="2025-09-13T00:50:40.898929588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:40.899187 env[1756]: time="2025-09-13T00:50:40.899016702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:40.899187 env[1756]: time="2025-09-13T00:50:40.899049816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:40.899492 env[1756]: time="2025-09-13T00:50:40.899226679Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883 pid=4937 runtime=io.containerd.runc.v2 Sep 13 00:50:40.905279 env[1756]: time="2025-09-13T00:50:40.905223046Z" level=info msg="StopPodSandbox for \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\"" Sep 13 00:50:40.964000 audit[4973]: NETFILTER_CFG table=filter:111 family=2 entries=60 op=nft_register_chain pid=4973 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:40.964000 audit[4973]: SYSCALL arch=c000003e syscall=46 success=yes exit=28952 a0=3 a1=7ffc337ad690 a2=0 a3=7ffc337ad67c items=0 ppid=4202 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:40.964000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:41.034107 env[1756]: time="2025-09-13T00:50:41.030112647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2frpj,Uid:fda33809-2b03-4521-bce2-be3153adfcec,Namespace:kube-system,Attempt:1,} returns sandbox id \"4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883\"" Sep 13 00:50:41.052351 env[1756]: time="2025-09-13T00:50:41.052307501Z" level=info msg="CreateContainer within sandbox \"4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:50:41.107072 env[1756]: time="2025-09-13T00:50:41.104745501Z" level=info msg="CreateContainer within sandbox \"4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3ca4dc9857afa2cb3a354f34cd4d912067090ec6a4578c7384f38c97fd7105d\"" Sep 13 00:50:41.110865 env[1756]: time="2025-09-13T00:50:41.110815292Z" level=info msg="StartContainer for \"b3ca4dc9857afa2cb3a354f34cd4d912067090ec6a4578c7384f38c97fd7105d\"" Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.042 [INFO][4967] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.043 [INFO][4967] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" iface="eth0" netns="/var/run/netns/cni-f9ad9eb4-1087-59a9-61d1-c5be8cfa0014" Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.044 [INFO][4967] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" iface="eth0" netns="/var/run/netns/cni-f9ad9eb4-1087-59a9-61d1-c5be8cfa0014" Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.046 [INFO][4967] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" iface="eth0" netns="/var/run/netns/cni-f9ad9eb4-1087-59a9-61d1-c5be8cfa0014" Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.046 [INFO][4967] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.046 [INFO][4967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.092 [INFO][4993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" HandleID="k8s-pod-network.74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.093 [INFO][4993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.093 [INFO][4993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.115 [WARNING][4993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" HandleID="k8s-pod-network.74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.115 [INFO][4993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" HandleID="k8s-pod-network.74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.117 [INFO][4993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:41.131698 env[1756]: 2025-09-13 00:50:41.122 [INFO][4967] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:41.133410 env[1756]: time="2025-09-13T00:50:41.131932425Z" level=info msg="TearDown network for sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\" successfully" Sep 13 00:50:41.133410 env[1756]: time="2025-09-13T00:50:41.131972806Z" level=info msg="StopPodSandbox for \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\" returns successfully" Sep 13 00:50:41.216917 env[1756]: time="2025-09-13T00:50:41.215838816Z" level=info msg="StartContainer for \"b3ca4dc9857afa2cb3a354f34cd4d912067090ec6a4578c7384f38c97fd7105d\" returns successfully" Sep 13 00:50:41.240470 env[1756]: time="2025-09-13T00:50:41.240407028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nk5zv,Uid:5dfe4081-c1d4-427b-9b51-88e00048651f,Namespace:kube-system,Attempt:1,}" Sep 13 00:50:41.367735 systemd-networkd[1433]: calibd2bb636d7a: Gained IPv6LL Sep 13 00:50:41.504000 audit[5054]: NETFILTER_CFG table=filter:112 family=2 entries=18 op=nft_register_rule pid=5054 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:41.504000 audit[5054]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffde525b1c0 a2=0 a3=7ffde525b1ac items=0 ppid=2797 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:41.504000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:41.519000 audit[5054]: NETFILTER_CFG table=nat:113 family=2 entries=16 op=nft_register_rule pid=5054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:41.519000 audit[5054]: SYSCALL arch=c000003e syscall=46 success=yes exit=4236 a0=3 a1=7ffde525b1c0 a2=0 a3=0 items=0 ppid=2797 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:41.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:41.532530 systemd[1]: run-netns-cni\x2df9ad9eb4\x2d1087\x2d59a9\x2d61d1\x2dc5be8cfa0014.mount: Deactivated successfully. 
Sep 13 00:50:41.614674 systemd-networkd[1433]: calib574b3b1eb5: Link UP Sep 13 00:50:41.626325 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib574b3b1eb5: link becomes ready Sep 13 00:50:41.625681 systemd-networkd[1433]: calib574b3b1eb5: Gained carrier Sep 13 00:50:41.646377 kubelet[2691]: I0913 00:50:41.645446 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2frpj" podStartSLOduration=45.64542004 podStartE2EDuration="45.64542004s" podCreationTimestamp="2025-09-13 00:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:50:41.48126225 +0000 UTC m=+49.788891205" watchObservedRunningTime="2025-09-13 00:50:41.64542004 +0000 UTC m=+49.953048990" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.331 [INFO][5030] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0 coredns-7c65d6cfc9- kube-system 5dfe4081-c1d4-427b-9b51-88e00048651f 986 0 2025-09-13 00:49:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-243 coredns-7c65d6cfc9-nk5zv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib574b3b1eb5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nk5zv" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.331 [INFO][5030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nk5zv" 
WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.479 [INFO][5047] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" HandleID="k8s-pod-network.814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.480 [INFO][5047] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" HandleID="k8s-pod-network.814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b6230), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-243", "pod":"coredns-7c65d6cfc9-nk5zv", "timestamp":"2025-09-13 00:50:41.479372487 +0000 UTC"}, Hostname:"ip-172-31-30-243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.480 [INFO][5047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.481 [INFO][5047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.481 [INFO][5047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-243' Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.501 [INFO][5047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" host="ip-172-31-30-243" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.515 [INFO][5047] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-243" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.534 [INFO][5047] ipam/ipam.go 511: Trying affinity for 192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.537 [INFO][5047] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.540 [INFO][5047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.540 [INFO][5047] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" host="ip-172-31-30-243" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.548 [INFO][5047] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771 Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.561 [INFO][5047] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" host="ip-172-31-30-243" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.582 [INFO][5047] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.124.199/26] block=192.168.124.192/26 
handle="k8s-pod-network.814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" host="ip-172-31-30-243" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.582 [INFO][5047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.199/26] handle="k8s-pod-network.814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" host="ip-172-31-30-243" Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.582 [INFO][5047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:41.665728 env[1756]: 2025-09-13 00:50:41.582 [INFO][5047] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.199/26] IPv6=[] ContainerID="814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" HandleID="k8s-pod-network.814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:41.672177 env[1756]: 2025-09-13 00:50:41.597 [INFO][5030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nk5zv" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5dfe4081-c1d4-427b-9b51-88e00048651f", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 49, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"", Pod:"coredns-7c65d6cfc9-nk5zv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib574b3b1eb5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:41.672177 env[1756]: 2025-09-13 00:50:41.597 [INFO][5030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.199/32] ContainerID="814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nk5zv" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:41.672177 env[1756]: 2025-09-13 00:50:41.597 [INFO][5030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib574b3b1eb5 ContainerID="814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nk5zv" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:41.672177 env[1756]: 2025-09-13 00:50:41.630 [INFO][5030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-nk5zv" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:41.672177 env[1756]: 2025-09-13 00:50:41.631 [INFO][5030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nk5zv" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5dfe4081-c1d4-427b-9b51-88e00048651f", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 49, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771", Pod:"coredns-7c65d6cfc9-nk5zv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib574b3b1eb5", MAC:"6e:9b:6b:7d:18:57", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:41.672177 env[1756]: 2025-09-13 00:50:41.651 [INFO][5030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nk5zv" WorkloadEndpoint="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:41.703000 audit[5066]: NETFILTER_CFG table=filter:114 family=2 entries=36 op=nft_register_chain pid=5066 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:41.703000 audit[5066]: SYSCALL arch=c000003e syscall=46 success=yes exit=19176 a0=3 a1=7ffe733b80e0 a2=0 a3=7ffe733b80cc items=0 ppid=4202 pid=5066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:41.703000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:41.753259 env[1756]: time="2025-09-13T00:50:41.753163149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:41.753442 env[1756]: time="2025-09-13T00:50:41.753285207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:41.753442 env[1756]: time="2025-09-13T00:50:41.753333458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:41.753644 env[1756]: time="2025-09-13T00:50:41.753597819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771 pid=5074 runtime=io.containerd.runc.v2 Sep 13 00:50:41.814042 systemd-networkd[1433]: cali832bff5e98b: Gained IPv6LL Sep 13 00:50:41.880055 systemd-networkd[1433]: cali08f62cf9d8d: Gained IPv6LL Sep 13 00:50:41.880385 systemd-networkd[1433]: cali0b38c8fd994: Gained IPv6LL Sep 13 00:50:41.907903 env[1756]: time="2025-09-13T00:50:41.907391287Z" level=info msg="StopPodSandbox for \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\"" Sep 13 00:50:42.023624 env[1756]: time="2025-09-13T00:50:42.023326077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nk5zv,Uid:5dfe4081-c1d4-427b-9b51-88e00048651f,Namespace:kube-system,Attempt:1,} returns sandbox id \"814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771\"" Sep 13 00:50:42.075832 env[1756]: time="2025-09-13T00:50:42.075785498Z" level=info msg="CreateContainer within sandbox \"814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:50:42.080394 sshd[4770]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:42.092000 audit[4770]: USER_END pid=4770 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:42.092000 audit[4770]: CRED_DISP pid=4770 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:42.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.30.243:22-147.75.109.163:43518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:42.108594 systemd[1]: sshd@7-172.31.30.243:22-147.75.109.163:43518.service: Deactivated successfully. Sep 13 00:50:42.130092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount731970776.mount: Deactivated successfully. Sep 13 00:50:42.133390 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:50:42.135090 systemd-logind[1741]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:50:42.140247 systemd-logind[1741]: Removed session 8. Sep 13 00:50:42.154311 env[1756]: time="2025-09-13T00:50:42.154268579Z" level=info msg="CreateContainer within sandbox \"814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72c918923a1e9a342577ea118e685baec5ebfca3b31ef7a40ee7bce5c55f7195\"" Sep 13 00:50:42.155272 env[1756]: time="2025-09-13T00:50:42.155224675Z" level=info msg="StartContainer for \"72c918923a1e9a342577ea118e685baec5ebfca3b31ef7a40ee7bce5c55f7195\"" Sep 13 00:50:42.396862 env[1756]: time="2025-09-13T00:50:42.396765290Z" level=info msg="StartContainer for \"72c918923a1e9a342577ea118e685baec5ebfca3b31ef7a40ee7bce5c55f7195\" returns successfully" Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.186 [INFO][5113] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.186 [INFO][5113] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" iface="eth0" netns="/var/run/netns/cni-26999272-8570-4417-926b-29be8aa512a6" Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.190 [INFO][5113] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" iface="eth0" netns="/var/run/netns/cni-26999272-8570-4417-926b-29be8aa512a6" Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.190 [INFO][5113] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" iface="eth0" netns="/var/run/netns/cni-26999272-8570-4417-926b-29be8aa512a6" Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.190 [INFO][5113] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.190 [INFO][5113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.363 [INFO][5133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" HandleID="k8s-pod-network.514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.364 [INFO][5133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.364 [INFO][5133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.393 [WARNING][5133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" HandleID="k8s-pod-network.514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.393 [INFO][5133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" HandleID="k8s-pod-network.514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.395 [INFO][5133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:42.407098 env[1756]: 2025-09-13 00:50:42.400 [INFO][5113] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:42.408523 env[1756]: time="2025-09-13T00:50:42.408476158Z" level=info msg="TearDown network for sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\" successfully" Sep 13 00:50:42.408692 env[1756]: time="2025-09-13T00:50:42.408666210Z" level=info msg="StopPodSandbox for \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\" returns successfully" Sep 13 00:50:42.409705 env[1756]: time="2025-09-13T00:50:42.409674316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcbb86cdd-8mn66,Uid:3899ee84-120e-4dd4-9caa-e6d9f0157ae0,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:50:42.426262 env[1756]: time="2025-09-13T00:50:42.426227126Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:42.439786 env[1756]: time="2025-09-13T00:50:42.439744268Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:42.463787 env[1756]: time="2025-09-13T00:50:42.463745241Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:42.474191 env[1756]: time="2025-09-13T00:50:42.474141948Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:42.475047 env[1756]: time="2025-09-13T00:50:42.475007997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:50:42.485314 env[1756]: time="2025-09-13T00:50:42.485269003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:50:42.539098 systemd[1]: run-netns-cni\x2d26999272\x2d8570\x2d4417\x2d926b\x2d29be8aa512a6.mount: Deactivated successfully. Sep 13 00:50:42.681019 env[1756]: time="2025-09-13T00:50:42.680965632Z" level=info msg="CreateContainer within sandbox \"83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:50:42.765011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500932766.mount: Deactivated successfully. 
Sep 13 00:50:42.775523 systemd-networkd[1433]: calibebe9621c50: Gained IPv6LL Sep 13 00:50:42.793000 audit[5187]: NETFILTER_CFG table=filter:115 family=2 entries=18 op=nft_register_rule pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:42.793000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd482a9fc0 a2=0 a3=7ffd482a9fac items=0 ppid=2797 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:42.793000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:42.799000 audit[5187]: NETFILTER_CFG table=nat:116 family=2 entries=16 op=nft_register_rule pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:42.799000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=4236 a0=3 a1=7ffd482a9fc0 a2=0 a3=0 items=0 ppid=2797 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:42.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:42.813330 kubelet[2691]: I0913 00:50:42.813249 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nk5zv" podStartSLOduration=46.813224402 podStartE2EDuration="46.813224402s" podCreationTimestamp="2025-09-13 00:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:50:42.780485853 +0000 UTC m=+51.088114807" watchObservedRunningTime="2025-09-13 00:50:42.813224402 +0000 UTC 
m=+51.120853352" Sep 13 00:50:42.817172 env[1756]: time="2025-09-13T00:50:42.817094911Z" level=info msg="CreateContainer within sandbox \"83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d0d4b6ccebb1b5d79fef79d4eae0a77d8fca7908b8d3cef3c652e9080d517f45\"" Sep 13 00:50:42.839143 env[1756]: time="2025-09-13T00:50:42.837836226Z" level=info msg="StartContainer for \"d0d4b6ccebb1b5d79fef79d4eae0a77d8fca7908b8d3cef3c652e9080d517f45\"" Sep 13 00:50:42.980000 audit[5209]: NETFILTER_CFG table=filter:117 family=2 entries=15 op=nft_register_rule pid=5209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:42.980000 audit[5209]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd9c1c3db0 a2=0 a3=7ffd9c1c3d9c items=0 ppid=2797 pid=5209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:42.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:42.989000 audit[5209]: NETFILTER_CFG table=nat:118 family=2 entries=37 op=nft_register_chain pid=5209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:42.989000 audit[5209]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffd9c1c3db0 a2=0 a3=7ffd9c1c3d9c items=0 ppid=2797 pid=5209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:42.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:43.020134 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 
00:50:43.020278 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali08cadfaa167: link becomes ready Sep 13 00:50:43.020491 systemd-networkd[1433]: cali08cadfaa167: Link UP Sep 13 00:50:43.020863 systemd-networkd[1433]: cali08cadfaa167: Gained carrier Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.609 [INFO][5166] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0 calico-apiserver-5dcbb86cdd- calico-apiserver 3899ee84-120e-4dd4-9caa-e6d9f0157ae0 1005 0 2025-09-13 00:50:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dcbb86cdd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-243 calico-apiserver-5dcbb86cdd-8mn66 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali08cadfaa167 [] [] }} ContainerID="36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-8mn66" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.609 [INFO][5166] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-8mn66" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.852 [INFO][5182] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" HandleID="k8s-pod-network.36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" 
Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.852 [INFO][5182] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" HandleID="k8s-pod-network.36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-243", "pod":"calico-apiserver-5dcbb86cdd-8mn66", "timestamp":"2025-09-13 00:50:42.85250314 +0000 UTC"}, Hostname:"ip-172-31-30-243", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.852 [INFO][5182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.852 [INFO][5182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.852 [INFO][5182] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-243' Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.894 [INFO][5182] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" host="ip-172-31-30-243" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.914 [INFO][5182] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-243" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.931 [INFO][5182] ipam/ipam.go 511: Trying affinity for 192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.935 [INFO][5182] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.940 [INFO][5182] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ip-172-31-30-243" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.940 [INFO][5182] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" host="ip-172-31-30-243" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.957 [INFO][5182] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.972 [INFO][5182] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" host="ip-172-31-30-243" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.993 [INFO][5182] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.124.200/26] block=192.168.124.192/26 
handle="k8s-pod-network.36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" host="ip-172-31-30-243" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.993 [INFO][5182] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.200/26] handle="k8s-pod-network.36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" host="ip-172-31-30-243" Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.993 [INFO][5182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:43.070553 env[1756]: 2025-09-13 00:50:42.994 [INFO][5182] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.200/26] IPv6=[] ContainerID="36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" HandleID="k8s-pod-network.36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:43.071807 env[1756]: 2025-09-13 00:50:42.997 [INFO][5166] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-8mn66" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0", GenerateName:"calico-apiserver-5dcbb86cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3899ee84-120e-4dd4-9caa-e6d9f0157ae0", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcbb86cdd", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"", Pod:"calico-apiserver-5dcbb86cdd-8mn66", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08cadfaa167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:43.071807 env[1756]: 2025-09-13 00:50:42.997 [INFO][5166] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.200/32] ContainerID="36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-8mn66" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:43.071807 env[1756]: 2025-09-13 00:50:42.997 [INFO][5166] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08cadfaa167 ContainerID="36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-8mn66" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:43.071807 env[1756]: 2025-09-13 00:50:43.020 [INFO][5166] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-8mn66" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:43.071807 env[1756]: 2025-09-13 00:50:43.028 [INFO][5166] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-8mn66" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0", GenerateName:"calico-apiserver-5dcbb86cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3899ee84-120e-4dd4-9caa-e6d9f0157ae0", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcbb86cdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f", Pod:"calico-apiserver-5dcbb86cdd-8mn66", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08cadfaa167", MAC:"56:fa:54:bb:b2:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:43.071807 env[1756]: 2025-09-13 00:50:43.067 [INFO][5166] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f" Namespace="calico-apiserver" Pod="calico-apiserver-5dcbb86cdd-8mn66" WorkloadEndpoint="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:43.095187 env[1756]: time="2025-09-13T00:50:43.095137432Z" level=info msg="StartContainer for \"d0d4b6ccebb1b5d79fef79d4eae0a77d8fca7908b8d3cef3c652e9080d517f45\" returns successfully" Sep 13 00:50:43.135000 audit[5247]: NETFILTER_CFG table=filter:119 family=2 entries=41 op=nft_register_chain pid=5247 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:50:43.137096 env[1756]: time="2025-09-13T00:50:43.133786016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:43.137096 env[1756]: time="2025-09-13T00:50:43.133855175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:43.137096 env[1756]: time="2025-09-13T00:50:43.133919158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:43.137096 env[1756]: time="2025-09-13T00:50:43.134111716Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f pid=5241 runtime=io.containerd.runc.v2 Sep 13 00:50:43.135000 audit[5247]: SYSCALL arch=c000003e syscall=46 success=yes exit=23096 a0=3 a1=7ffd25cbe2f0 a2=0 a3=7ffd25cbe2dc items=0 ppid=4202 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:43.135000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:50:43.218084 env[1756]: time="2025-09-13T00:50:43.218040333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcbb86cdd-8mn66,Uid:3899ee84-120e-4dd4-9caa-e6d9f0157ae0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f\"" Sep 13 00:50:43.533860 systemd[1]: run-containerd-runc-k8s.io-d0d4b6ccebb1b5d79fef79d4eae0a77d8fca7908b8d3cef3c652e9080d517f45-runc.36B5en.mount: Deactivated successfully. 
Sep 13 00:50:43.542375 systemd-networkd[1433]: calib574b3b1eb5: Gained IPv6LL Sep 13 00:50:44.094000 audit[5278]: NETFILTER_CFG table=filter:120 family=2 entries=12 op=nft_register_rule pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:44.094000 audit[5278]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe68b9a050 a2=0 a3=7ffe68b9a03c items=0 ppid=2797 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:44.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:44.229000 audit[5278]: NETFILTER_CFG table=nat:121 family=2 entries=58 op=nft_register_chain pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:44.229000 audit[5278]: SYSCALL arch=c000003e syscall=46 success=yes exit=20628 a0=3 a1=7ffe68b9a050 a2=0 a3=7ffe68b9a03c items=0 ppid=2797 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:44.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:44.693112 systemd-networkd[1433]: cali08cadfaa167: Gained IPv6LL Sep 13 00:50:45.418463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154860661.mount: Deactivated successfully. 
Sep 13 00:50:46.411771 env[1756]: time="2025-09-13T00:50:46.411612163Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:46.416831 env[1756]: time="2025-09-13T00:50:46.416795553Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:46.420967 env[1756]: time="2025-09-13T00:50:46.420926782Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:46.424989 env[1756]: time="2025-09-13T00:50:46.424952846Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:46.426105 env[1756]: time="2025-09-13T00:50:46.426072288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:50:46.441587 env[1756]: time="2025-09-13T00:50:46.441313246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:50:46.522771 env[1756]: time="2025-09-13T00:50:46.522724158Z" level=info msg="CreateContainer within sandbox \"cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:50:46.575316 env[1756]: time="2025-09-13T00:50:46.575237002Z" level=info msg="CreateContainer within sandbox \"cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id 
\"4cec1b91f0be62237b5ca462711de24a749225539ad6ee9f6727c39587f3d7bb\"" Sep 13 00:50:46.583253 env[1756]: time="2025-09-13T00:50:46.583161830Z" level=info msg="StartContainer for \"4cec1b91f0be62237b5ca462711de24a749225539ad6ee9f6727c39587f3d7bb\"" Sep 13 00:50:46.618295 systemd[1]: run-containerd-runc-k8s.io-4cec1b91f0be62237b5ca462711de24a749225539ad6ee9f6727c39587f3d7bb-runc.wgdi7n.mount: Deactivated successfully. Sep 13 00:50:46.699711 env[1756]: time="2025-09-13T00:50:46.699651051Z" level=info msg="StartContainer for \"4cec1b91f0be62237b5ca462711de24a749225539ad6ee9f6727c39587f3d7bb\" returns successfully" Sep 13 00:50:46.829328 kernel: kauditd_printk_skb: 52 callbacks suppressed Sep 13 00:50:46.839890 kernel: audit: type=1325 audit(1757724646.823:434): table=filter:122 family=2 entries=12 op=nft_register_rule pid=5322 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:46.840763 kernel: audit: type=1300 audit(1757724646.823:434): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd45979010 a2=0 a3=7ffd45978ffc items=0 ppid=2797 pid=5322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:46.841950 kernel: audit: type=1327 audit(1757724646.823:434): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:46.823000 audit[5322]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=5322 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:46.823000 audit[5322]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd45979010 a2=0 a3=7ffd45978ffc items=0 ppid=2797 pid=5322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:50:46.846114 kernel: audit: type=1325 audit(1757724646.838:435): table=nat:123 family=2 entries=22 op=nft_register_rule pid=5322 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:46.852547 kernel: audit: type=1300 audit(1757724646.838:435): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd45979010 a2=0 a3=7ffd45978ffc items=0 ppid=2797 pid=5322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:46.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:46.838000 audit[5322]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=5322 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:46.838000 audit[5322]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd45979010 a2=0 a3=7ffd45978ffc items=0 ppid=2797 pid=5322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:46.838000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:46.860157 kernel: audit: type=1327 audit(1757724646.838:435): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:46.898106 kubelet[2691]: I0913 00:50:46.863254 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-xnfhz" podStartSLOduration=27.987992561 podStartE2EDuration="33.830895857s" podCreationTimestamp="2025-09-13 00:50:13 +0000 UTC" firstStartedPulling="2025-09-13 00:50:40.593386377 +0000 UTC 
m=+48.901015320" lastFinishedPulling="2025-09-13 00:50:46.436289673 +0000 UTC m=+54.743918616" observedRunningTime="2025-09-13 00:50:46.830254227 +0000 UTC m=+55.137883178" watchObservedRunningTime="2025-09-13 00:50:46.830895857 +0000 UTC m=+55.138524809" Sep 13 00:50:47.118019 systemd[1]: Started sshd@8-172.31.30.243:22-147.75.109.163:43528.service. Sep 13 00:50:47.126137 kernel: audit: type=1130 audit(1757724647.118:436): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.243:22-147.75.109.163:43528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:47.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.243:22-147.75.109.163:43528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:47.405000 audit[5343]: USER_ACCT pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:47.412824 sshd[5343]: Accepted publickey for core from 147.75.109.163 port 43528 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:50:47.413497 kernel: audit: type=1101 audit(1757724647.405:437): pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:47.412000 audit[5343]: CRED_ACQ pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Sep 13 00:50:47.422718 kernel: audit: type=1103 audit(1757724647.412:438): pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:47.422851 kernel: audit: type=1006 audit(1757724647.412:439): pid=5343 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Sep 13 00:50:47.412000 audit[5343]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe6fbbd00 a2=3 a3=0 items=0 ppid=1 pid=5343 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:47.412000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:50:47.424720 sshd[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:50:47.447665 systemd[1]: Started session-9.scope. Sep 13 00:50:47.448100 systemd-logind[1741]: New session 9 of user core. Sep 13 00:50:47.455000 audit[5343]: USER_START pid=5343 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:47.458000 audit[5346]: CRED_ACQ pid=5346 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:48.464267 systemd[1]: run-containerd-runc-k8s.io-4cec1b91f0be62237b5ca462711de24a749225539ad6ee9f6727c39587f3d7bb-runc.GsQlDQ.mount: Deactivated successfully. 
Sep 13 00:50:49.138098 sshd[5343]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:49.139000 audit[5343]: USER_END pid=5343 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:49.139000 audit[5343]: CRED_DISP pid=5343 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:49.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.243:22-147.75.109.163:43528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:49.143901 systemd[1]: sshd@8-172.31.30.243:22-147.75.109.163:43528.service: Deactivated successfully. Sep 13 00:50:49.146517 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:50:49.148392 systemd-logind[1741]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:50:49.151998 systemd-logind[1741]: Removed session 9. Sep 13 00:50:49.308193 systemd[1]: run-containerd-runc-k8s.io-4cec1b91f0be62237b5ca462711de24a749225539ad6ee9f6727c39587f3d7bb-runc.3ED1EC.mount: Deactivated successfully. 
Sep 13 00:50:50.422010 env[1756]: time="2025-09-13T00:50:50.421779127Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:50.426377 env[1756]: time="2025-09-13T00:50:50.426334146Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:50.429499 env[1756]: time="2025-09-13T00:50:50.429458814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:50.432408 env[1756]: time="2025-09-13T00:50:50.432370997Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:50.433158 env[1756]: time="2025-09-13T00:50:50.433120986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:50:50.497738 env[1756]: time="2025-09-13T00:50:50.497445118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:50:50.605465 env[1756]: time="2025-09-13T00:50:50.605401926Z" level=info msg="CreateContainer within sandbox \"51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:50:50.630898 env[1756]: time="2025-09-13T00:50:50.629567536Z" level=info msg="CreateContainer within sandbox \"51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c0b43aa472bcdd6752fed28bff67aea7d7edc20782dde5ec3c5810fe77b8860a\"" Sep 13 00:50:50.637252 env[1756]: time="2025-09-13T00:50:50.637217444Z" level=info msg="StartContainer for \"c0b43aa472bcdd6752fed28bff67aea7d7edc20782dde5ec3c5810fe77b8860a\"" Sep 13 00:50:50.747155 env[1756]: time="2025-09-13T00:50:50.747043705Z" level=info msg="StartContainer for \"c0b43aa472bcdd6752fed28bff67aea7d7edc20782dde5ec3c5810fe77b8860a\" returns successfully" Sep 13 00:50:51.318474 kubelet[2691]: I0913 00:50:51.318253 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5775d679df-x29l7" podStartSLOduration=27.532014089 podStartE2EDuration="37.307400369s" podCreationTimestamp="2025-09-13 00:50:14 +0000 UTC" firstStartedPulling="2025-09-13 00:50:40.671455327 +0000 UTC m=+48.979084260" lastFinishedPulling="2025-09-13 00:50:50.446841568 +0000 UTC m=+58.754470540" observedRunningTime="2025-09-13 00:50:51.288061927 +0000 UTC m=+59.595690880" watchObservedRunningTime="2025-09-13 00:50:51.307400369 +0000 UTC m=+59.615029320" Sep 13 00:50:51.968763 env[1756]: time="2025-09-13T00:50:51.968674249Z" level=info msg="StopPodSandbox for \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\"" Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:52.511 [WARNING][5470] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7dc04780-57ee-4fd0-a262-21a1cfd3d394", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e", Pod:"goldmane-7988f88666-xnfhz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali832bff5e98b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:52.530 [INFO][5470] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:52.530 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" iface="eth0" netns="" Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:52.530 [INFO][5470] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:52.530 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:52.979 [INFO][5477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" HandleID="k8s-pod-network.ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:52.982 [INFO][5477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:52.983 [INFO][5477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:52.999 [WARNING][5477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" HandleID="k8s-pod-network.ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:52.999 [INFO][5477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" HandleID="k8s-pod-network.ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:53.003 [INFO][5477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:53.010904 env[1756]: 2025-09-13 00:50:53.007 [INFO][5470] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:53.015126 env[1756]: time="2025-09-13T00:50:53.013469134Z" level=info msg="TearDown network for sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\" successfully" Sep 13 00:50:53.015126 env[1756]: time="2025-09-13T00:50:53.013514764Z" level=info msg="StopPodSandbox for \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\" returns successfully" Sep 13 00:50:53.100482 env[1756]: time="2025-09-13T00:50:53.100176077Z" level=info msg="RemovePodSandbox for \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\"" Sep 13 00:50:53.100482 env[1756]: time="2025-09-13T00:50:53.100224890Z" level=info msg="Forcibly stopping sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\"" Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.301 [WARNING][5493] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7dc04780-57ee-4fd0-a262-21a1cfd3d394", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"cf450bed3dfb667b6aa3ca0642575a51d683a855e31a0ae53bfdd9894e28a88e", Pod:"goldmane-7988f88666-xnfhz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali832bff5e98b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.301 [INFO][5493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.301 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" iface="eth0" netns="" Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.301 [INFO][5493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.302 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.353 [INFO][5502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" HandleID="k8s-pod-network.ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.354 [INFO][5502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.354 [INFO][5502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.364 [WARNING][5502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" HandleID="k8s-pod-network.ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.364 [INFO][5502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" HandleID="k8s-pod-network.ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Workload="ip--172--31--30--243-k8s-goldmane--7988f88666--xnfhz-eth0" Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.371 [INFO][5502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:53.380219 env[1756]: 2025-09-13 00:50:53.376 [INFO][5493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a" Sep 13 00:50:53.380219 env[1756]: time="2025-09-13T00:50:53.379255903Z" level=info msg="TearDown network for sandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\" successfully" Sep 13 00:50:53.391102 env[1756]: time="2025-09-13T00:50:53.391047088Z" level=info msg="RemovePodSandbox \"ca94de4d946974b154df8079a22d2ae11643535754ad9490a83f513fcc70af8a\" returns successfully" Sep 13 00:50:53.392292 env[1756]: time="2025-09-13T00:50:53.392249757Z" level=info msg="StopPodSandbox for \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\"" Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.484 [WARNING][5519] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0", GenerateName:"calico-apiserver-5dcbb86cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3899ee84-120e-4dd4-9caa-e6d9f0157ae0", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcbb86cdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f", Pod:"calico-apiserver-5dcbb86cdd-8mn66", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08cadfaa167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.484 [INFO][5519] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.484 [INFO][5519] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" iface="eth0" netns="" Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.484 [INFO][5519] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.484 [INFO][5519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.561 [INFO][5526] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" HandleID="k8s-pod-network.514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.561 [INFO][5526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.562 [INFO][5526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.571 [WARNING][5526] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" HandleID="k8s-pod-network.514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.571 [INFO][5526] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" HandleID="k8s-pod-network.514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.576 [INFO][5526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:53.581769 env[1756]: 2025-09-13 00:50:53.579 [INFO][5519] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:53.583971 env[1756]: time="2025-09-13T00:50:53.581866089Z" level=info msg="TearDown network for sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\" successfully" Sep 13 00:50:53.583971 env[1756]: time="2025-09-13T00:50:53.581926344Z" level=info msg="StopPodSandbox for \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\" returns successfully" Sep 13 00:50:53.592965 env[1756]: time="2025-09-13T00:50:53.592542536Z" level=info msg="RemovePodSandbox for \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\"" Sep 13 00:50:53.592965 env[1756]: time="2025-09-13T00:50:53.592590358Z" level=info msg="Forcibly stopping sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\"" Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.709 [WARNING][5540] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0", GenerateName:"calico-apiserver-5dcbb86cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3899ee84-120e-4dd4-9caa-e6d9f0157ae0", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcbb86cdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f", Pod:"calico-apiserver-5dcbb86cdd-8mn66", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08cadfaa167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.710 [INFO][5540] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.710 [INFO][5540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" iface="eth0" netns="" Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.710 [INFO][5540] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.710 [INFO][5540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.769 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" HandleID="k8s-pod-network.514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.769 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.769 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.777 [WARNING][5547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" HandleID="k8s-pod-network.514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.777 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" HandleID="k8s-pod-network.514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--8mn66-eth0" Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.780 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:53.785703 env[1756]: 2025-09-13 00:50:53.783 [INFO][5540] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06" Sep 13 00:50:53.788499 env[1756]: time="2025-09-13T00:50:53.785733742Z" level=info msg="TearDown network for sandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\" successfully" Sep 13 00:50:53.792761 env[1756]: time="2025-09-13T00:50:53.792713095Z" level=info msg="RemovePodSandbox \"514bb50e80605abc14ffbf3a4446de6587894f889a320de0f53ef3d5e0d17f06\" returns successfully" Sep 13 00:50:53.793420 env[1756]: time="2025-09-13T00:50:53.793388796Z" level=info msg="StopPodSandbox for \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\"" Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.864 [WARNING][5563] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0", GenerateName:"calico-apiserver-5dcbb86cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"71d9264d-8131-4edf-9956-3d6532ed3b91", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcbb86cdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed", Pod:"calico-apiserver-5dcbb86cdd-p4282", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b38c8fd994", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.864 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.864 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" iface="eth0" netns="" Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.864 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.864 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.916 [INFO][5570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" HandleID="k8s-pod-network.bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.916 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.916 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.929 [WARNING][5570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" HandleID="k8s-pod-network.bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.929 [INFO][5570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" HandleID="k8s-pod-network.bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.932 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:53.938021 env[1756]: 2025-09-13 00:50:53.935 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:53.940319 env[1756]: time="2025-09-13T00:50:53.938245883Z" level=info msg="TearDown network for sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\" successfully" Sep 13 00:50:53.940319 env[1756]: time="2025-09-13T00:50:53.938284274Z" level=info msg="StopPodSandbox for \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\" returns successfully" Sep 13 00:50:53.940319 env[1756]: time="2025-09-13T00:50:53.938743440Z" level=info msg="RemovePodSandbox for \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\"" Sep 13 00:50:53.940319 env[1756]: time="2025-09-13T00:50:53.939138879Z" level=info msg="Forcibly stopping sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\"" Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.023 [WARNING][5586] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0", GenerateName:"calico-apiserver-5dcbb86cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"71d9264d-8131-4edf-9956-3d6532ed3b91", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcbb86cdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed", Pod:"calico-apiserver-5dcbb86cdd-p4282", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b38c8fd994", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.023 [INFO][5586] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.023 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" iface="eth0" netns="" Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.023 [INFO][5586] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.023 [INFO][5586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.086 [INFO][5593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" HandleID="k8s-pod-network.bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.086 [INFO][5593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.086 [INFO][5593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.096 [WARNING][5593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" HandleID="k8s-pod-network.bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.096 [INFO][5593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" HandleID="k8s-pod-network.bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Workload="ip--172--31--30--243-k8s-calico--apiserver--5dcbb86cdd--p4282-eth0" Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.098 [INFO][5593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:54.105594 env[1756]: 2025-09-13 00:50:54.101 [INFO][5586] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a" Sep 13 00:50:54.105594 env[1756]: time="2025-09-13T00:50:54.104340016Z" level=info msg="TearDown network for sandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\" successfully" Sep 13 00:50:54.111585 env[1756]: time="2025-09-13T00:50:54.111516959Z" level=info msg="RemovePodSandbox \"bef9f6cf029ec6c5f5167c94d943edcac39730b32ec3657f4af147f3ac66912a\" returns successfully" Sep 13 00:50:54.112272 env[1756]: time="2025-09-13T00:50:54.112133501Z" level=info msg="StopPodSandbox for \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\"" Sep 13 00:50:54.178830 systemd[1]: Started sshd@9-172.31.30.243:22-147.75.109.163:32962.service. Sep 13 00:50:54.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.243:22-147.75.109.163:32962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:50:54.187186 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:50:54.188699 kernel: audit: type=1130 audit(1757724654.179:445): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.243:22-147.75.109.163:32962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.210 [WARNING][5608] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5dfe4081-c1d4-427b-9b51-88e00048651f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 49, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771", Pod:"coredns-7c65d6cfc9-nk5zv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib574b3b1eb5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.210 [INFO][5608] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.210 [INFO][5608] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" iface="eth0" netns="" Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.210 [INFO][5608] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.210 [INFO][5608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.296 [INFO][5617] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" HandleID="k8s-pod-network.74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.296 [INFO][5617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.304 [INFO][5617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.312 [WARNING][5617] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" HandleID="k8s-pod-network.74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.312 [INFO][5617] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" HandleID="k8s-pod-network.74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.314 [INFO][5617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:54.319785 env[1756]: 2025-09-13 00:50:54.316 [INFO][5608] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:54.320652 env[1756]: time="2025-09-13T00:50:54.320612735Z" level=info msg="TearDown network for sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\" successfully" Sep 13 00:50:54.320741 env[1756]: time="2025-09-13T00:50:54.320726125Z" level=info msg="StopPodSandbox for \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\" returns successfully" Sep 13 00:50:54.322367 env[1756]: time="2025-09-13T00:50:54.321278194Z" level=info msg="RemovePodSandbox for \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\"" Sep 13 00:50:54.322531 env[1756]: time="2025-09-13T00:50:54.322490748Z" level=info msg="Forcibly stopping sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\"" Sep 13 00:50:54.339069 env[1756]: time="2025-09-13T00:50:54.338224169Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:54.340235 env[1756]: time="2025-09-13T00:50:54.340205963Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:54.342076 env[1756]: time="2025-09-13T00:50:54.340904099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:54.342383 env[1756]: time="2025-09-13T00:50:54.342354377Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:54.343206 env[1756]: time="2025-09-13T00:50:54.343169855Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:50:54.359675 env[1756]: time="2025-09-13T00:50:54.357792313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:50:54.422945 env[1756]: time="2025-09-13T00:50:54.422900880Z" level=info msg="CreateContainer within sandbox \"c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:50:54.455247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2208183399.mount: Deactivated successfully. Sep 13 00:50:54.458146 env[1756]: time="2025-09-13T00:50:54.457948836Z" level=info msg="CreateContainer within sandbox \"c1d8dab801d8468181784d017ff2ec3c100d762bb65b1c68707d05b39e7aa2ed\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"df5b4fbe7cb8bc8a61900afcc7cea7bcd38ff724326dde9239d1cb51656430da\"" Sep 13 00:50:54.461024 env[1756]: time="2025-09-13T00:50:54.460941018Z" level=info msg="StartContainer for \"df5b4fbe7cb8bc8a61900afcc7cea7bcd38ff724326dde9239d1cb51656430da\"" Sep 13 00:50:54.525984 kernel: audit: type=1101 audit(1757724654.495:446): pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:54.495000 audit[5612]: USER_ACCT pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:54.533321 sshd[5612]: Accepted publickey for core from 147.75.109.163 port 32962 ssh2: RSA 
SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:50:54.549489 kernel: audit: type=1103 audit(1757724654.509:447): pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:54.509000 audit[5612]: CRED_ACQ pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:54.537389 systemd[1]: run-containerd-runc-k8s.io-df5b4fbe7cb8bc8a61900afcc7cea7bcd38ff724326dde9239d1cb51656430da-runc.Pd9CpR.mount: Deactivated successfully. Sep 13 00:50:54.534221 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:50:54.570100 kernel: audit: type=1006 audit(1757724654.509:448): pid=5612 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 13 00:50:54.570210 kernel: audit: type=1300 audit(1757724654.509:448): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4f35fe10 a2=3 a3=0 items=0 ppid=1 pid=5612 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:54.509000 audit[5612]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4f35fe10 a2=3 a3=0 items=0 ppid=1 pid=5612 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:54.579466 kernel: audit: type=1327 audit(1757724654.509:448): proctitle=737368643A20636F7265205B707269765D Sep 13 00:50:54.509000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 
00:50:54.594492 systemd[1]: Started session-10.scope. Sep 13 00:50:54.595488 systemd-logind[1741]: New session 10 of user core. Sep 13 00:50:54.623000 audit[5612]: USER_START pid=5612 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:54.636019 kernel: audit: type=1105 audit(1757724654.623:449): pid=5612 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.424 [WARNING][5633] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5dfe4081-c1d4-427b-9b51-88e00048651f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 49, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"814da9bb508e8e9e8faa87fdda45010c9d4ef7ed54cff13143bb049361a55771", Pod:"coredns-7c65d6cfc9-nk5zv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib574b3b1eb5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.424 
[INFO][5633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.425 [INFO][5633] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" iface="eth0" netns="" Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.425 [INFO][5633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.425 [INFO][5633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.488 [INFO][5640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" HandleID="k8s-pod-network.74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.488 [INFO][5640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.488 [INFO][5640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.557 [WARNING][5640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" HandleID="k8s-pod-network.74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.557 [INFO][5640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" HandleID="k8s-pod-network.74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--nk5zv-eth0" Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.580 [INFO][5640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:54.641243 env[1756]: 2025-09-13 00:50:54.615 [INFO][5633] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807" Sep 13 00:50:54.644284 env[1756]: time="2025-09-13T00:50:54.642146795Z" level=info msg="TearDown network for sandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\" successfully" Sep 13 00:50:54.623000 audit[5673]: CRED_ACQ pid=5673 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:54.651905 kernel: audit: type=1103 audit(1757724654.623:450): pid=5673 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:54.662087 env[1756]: time="2025-09-13T00:50:54.662053530Z" level=info msg="RemovePodSandbox \"74236a0e7e38acf0856d2940df87f58b728099782b5f9afa0907089c8aea3807\" returns successfully" Sep 13 00:50:54.662657 env[1756]: time="2025-09-13T00:50:54.662633500Z" 
level=info msg="StopPodSandbox for \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\"" Sep 13 00:50:54.691127 env[1756]: time="2025-09-13T00:50:54.691090182Z" level=info msg="StartContainer for \"df5b4fbe7cb8bc8a61900afcc7cea7bcd38ff724326dde9239d1cb51656430da\" returns successfully" Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.734 [WARNING][5689] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fda33809-2b03-4521-bce2-be3153adfcec", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 49, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883", Pod:"coredns-7c65d6cfc9-2frpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibebe9621c50", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.734 [INFO][5689] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.734 [INFO][5689] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" iface="eth0" netns="" Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.734 [INFO][5689] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.734 [INFO][5689] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.767 [INFO][5700] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" HandleID="k8s-pod-network.563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.768 [INFO][5700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.768 [INFO][5700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.780 [WARNING][5700] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" HandleID="k8s-pod-network.563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.780 [INFO][5700] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" HandleID="k8s-pod-network.563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.785 [INFO][5700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:54.793032 env[1756]: 2025-09-13 00:50:54.788 [INFO][5689] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:54.795411 env[1756]: time="2025-09-13T00:50:54.794279097Z" level=info msg="TearDown network for sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\" successfully" Sep 13 00:50:54.795411 env[1756]: time="2025-09-13T00:50:54.794313466Z" level=info msg="StopPodSandbox for \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\" returns successfully" Sep 13 00:50:54.818979 env[1756]: time="2025-09-13T00:50:54.818937370Z" level=info msg="RemovePodSandbox for \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\"" Sep 13 00:50:54.819132 env[1756]: time="2025-09-13T00:50:54.818983257Z" level=info msg="Forcibly stopping sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\"" Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:54.958 [WARNING][5727] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fda33809-2b03-4521-bce2-be3153adfcec", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 49, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"4ceef4c4a3607df654ba9c985dec616a9a1094af959edbfea31ed3f9ec583883", Pod:"coredns-7c65d6cfc9-2frpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibebe9621c50", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:54.958 
[INFO][5727] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:54.958 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" iface="eth0" netns="" Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:54.958 [INFO][5727] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:54.958 [INFO][5727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:55.024 [INFO][5745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" HandleID="k8s-pod-network.563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:55.024 [INFO][5745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:55.025 [INFO][5745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:55.035 [WARNING][5745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" HandleID="k8s-pod-network.563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:55.035 [INFO][5745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" HandleID="k8s-pod-network.563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Workload="ip--172--31--30--243-k8s-coredns--7c65d6cfc9--2frpj-eth0" Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:55.039 [INFO][5745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:55.052373 env[1756]: 2025-09-13 00:50:55.047 [INFO][5727] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4" Sep 13 00:50:55.052373 env[1756]: time="2025-09-13T00:50:55.051834978Z" level=info msg="TearDown network for sandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\" successfully" Sep 13 00:50:55.065445 env[1756]: time="2025-09-13T00:50:55.064803872Z" level=info msg="RemovePodSandbox \"563853b5484fdd1d5c521e9ec2540dc2a81c8a807fed3f3f348939ccb66611d4\" returns successfully" Sep 13 00:50:56.728734 env[1756]: time="2025-09-13T00:50:56.728697705Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:56.738193 env[1756]: time="2025-09-13T00:50:56.735695693Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:56.742158 env[1756]: time="2025-09-13T00:50:56.742123049Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:56.745655 env[1756]: time="2025-09-13T00:50:56.745622915Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:56.746991 env[1756]: time="2025-09-13T00:50:56.746620696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:50:56.801516 env[1756]: time="2025-09-13T00:50:56.801467799Z" level=info msg="StopPodSandbox for \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\"" Sep 13 00:50:56.839248 sshd[5612]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:56.873828 kernel: audit: type=1106 audit(1757724656.851:451): pid=5612 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:56.875311 kernel: audit: type=1104 audit(1757724656.861:452): pid=5612 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:56.851000 audit[5612]: USER_END pid=5612 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 
terminal=ssh res=success' Sep 13 00:50:56.861000 audit[5612]: CRED_DISP pid=5612 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:56.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.243:22-147.75.109.163:32962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:56.896843 systemd[1]: sshd@9-172.31.30.243:22-147.75.109.163:32962.service: Deactivated successfully. Sep 13 00:50:56.909520 systemd-logind[1741]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:50:56.915279 systemd[1]: Started sshd@10-172.31.30.243:22-147.75.109.163:32978.service. Sep 13 00:50:56.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.30.243:22-147.75.109.163:32978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:56.919167 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:50:56.922410 systemd-logind[1741]: Removed session 10. 
Sep 13 00:50:57.046920 env[1756]: time="2025-09-13T00:50:57.046765044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:50:57.157000 audit[5793]: USER_ACCT pid=5793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:57.160000 audit[5793]: CRED_ACQ pid=5793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:57.162443 sshd[5793]: Accepted publickey for core from 147.75.109.163 port 32978 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:50:57.160000 audit[5793]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefb7d4030 a2=3 a3=0 items=0 ppid=1 pid=5793 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:57.160000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:50:57.167067 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:50:57.192748 systemd[1]: Started session-11.scope. Sep 13 00:50:57.194620 systemd-logind[1741]: New session 11 of user core. 
Sep 13 00:50:57.203000 audit[5793]: USER_START pid=5793 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:57.208000 audit[5810]: CRED_ACQ pid=5810 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:57.295005 env[1756]: time="2025-09-13T00:50:57.294959271Z" level=info msg="CreateContainer within sandbox \"83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:50:57.344526 env[1756]: time="2025-09-13T00:50:57.344419707Z" level=info msg="CreateContainer within sandbox \"83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"edfbe28e72126b2c46db8b3dc443e5e95e08324ed0cff5d0e07ac4f60360d958\"" Sep 13 00:50:57.355450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942655122.mount: Deactivated successfully. Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.328 [WARNING][5779] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"543e9814-38b9-4890-8c16-f362d4a3151e", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5", Pod:"csi-node-driver-dbdlb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd2bb636d7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.339 [INFO][5779] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.339 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" iface="eth0" netns="" Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.339 [INFO][5779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.339 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.542 [INFO][5814] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" HandleID="k8s-pod-network.b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.550 [INFO][5814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.551 [INFO][5814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.584 [WARNING][5814] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" HandleID="k8s-pod-network.b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.585 [INFO][5814] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" HandleID="k8s-pod-network.b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.592 [INFO][5814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:57.601716 env[1756]: 2025-09-13 00:50:57.596 [INFO][5779] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:57.603004 env[1756]: time="2025-09-13T00:50:57.602962252Z" level=info msg="TearDown network for sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\" successfully" Sep 13 00:50:57.603133 env[1756]: time="2025-09-13T00:50:57.603113068Z" level=info msg="StopPodSandbox for \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\" returns successfully" Sep 13 00:50:57.642754 env[1756]: time="2025-09-13T00:50:57.642711120Z" level=info msg="StartContainer for \"edfbe28e72126b2c46db8b3dc443e5e95e08324ed0cff5d0e07ac4f60360d958\"" Sep 13 00:50:57.760730 env[1756]: time="2025-09-13T00:50:57.760680251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:57.768051 env[1756]: time="2025-09-13T00:50:57.766828005Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:57.774698 env[1756]: time="2025-09-13T00:50:57.774657433Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:57.777651 env[1756]: time="2025-09-13T00:50:57.777478681Z" level=info msg="StartContainer for \"edfbe28e72126b2c46db8b3dc443e5e95e08324ed0cff5d0e07ac4f60360d958\" returns successfully" Sep 13 00:50:57.779242 env[1756]: time="2025-09-13T00:50:57.779209326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:57.780560 env[1756]: time="2025-09-13T00:50:57.780533034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:50:58.075283 env[1756]: time="2025-09-13T00:50:58.074700936Z" level=info msg="RemovePodSandbox for \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\"" Sep 13 00:50:58.075283 env[1756]: time="2025-09-13T00:50:58.074735981Z" level=info msg="Forcibly stopping sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\"" Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.176 [WARNING][5872] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"543e9814-38b9-4890-8c16-f362d4a3151e", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"83324120d4ddc4af0cf192b52bb56996bae631941376d25d4b66e5289be217a5", Pod:"csi-node-driver-dbdlb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd2bb636d7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.176 [INFO][5872] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.176 [INFO][5872] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" iface="eth0" netns="" Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.176 [INFO][5872] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.176 [INFO][5872] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.465 [INFO][5879] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" HandleID="k8s-pod-network.b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.468 [INFO][5879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.468 [INFO][5879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.481 [WARNING][5879] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" HandleID="k8s-pod-network.b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.481 [INFO][5879] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" HandleID="k8s-pod-network.b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Workload="ip--172--31--30--243-k8s-csi--node--driver--dbdlb-eth0" Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.483 [INFO][5879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:58.490954 env[1756]: 2025-09-13 00:50:58.487 [INFO][5872] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa" Sep 13 00:50:58.493511 env[1756]: time="2025-09-13T00:50:58.491388950Z" level=info msg="TearDown network for sandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\" successfully" Sep 13 00:50:58.502360 env[1756]: time="2025-09-13T00:50:58.502309544Z" level=info msg="RemovePodSandbox \"b60ba222891b5fd30d4fbc3d7424aab6551c624a039bb5531ef3ec37c95d4baa\" returns successfully" Sep 13 00:50:58.543525 env[1756]: time="2025-09-13T00:50:58.543416407Z" level=info msg="CreateContainer within sandbox \"36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:50:58.573046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056893636.mount: Deactivated successfully. 
Sep 13 00:50:58.578269 env[1756]: time="2025-09-13T00:50:58.575818905Z" level=info msg="CreateContainer within sandbox \"36f64575cf6c688891479dfaf650bd4c3065bea78f983ba1e8d79435436b949f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7613ef6a4274df2d3799e9e60effe12c74dd8217680c8c1c0f76a76cfb3a262e\"" Sep 13 00:50:58.648264 env[1756]: time="2025-09-13T00:50:58.646900171Z" level=info msg="StartContainer for \"7613ef6a4274df2d3799e9e60effe12c74dd8217680c8c1c0f76a76cfb3a262e\"" Sep 13 00:50:58.648264 env[1756]: time="2025-09-13T00:50:58.646921355Z" level=info msg="StopPodSandbox for \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\"" Sep 13 00:50:58.824593 env[1756]: time="2025-09-13T00:50:58.823604669Z" level=info msg="StartContainer for \"7613ef6a4274df2d3799e9e60effe12c74dd8217680c8c1c0f76a76cfb3a262e\" returns successfully" Sep 13 00:50:58.870066 systemd[1]: run-containerd-runc-k8s.io-7613ef6a4274df2d3799e9e60effe12c74dd8217680c8c1c0f76a76cfb3a262e-runc.168kYJ.mount: Deactivated successfully. Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.803 [WARNING][5899] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" WorkloadEndpoint="ip--172--31--30--243-k8s-whisker--649cf9f94c--x7w52-eth0" Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.804 [INFO][5899] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.804 [INFO][5899] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" iface="eth0" netns="" Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.804 [INFO][5899] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.804 [INFO][5899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.879 [INFO][5922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" HandleID="k8s-pod-network.c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Workload="ip--172--31--30--243-k8s-whisker--649cf9f94c--x7w52-eth0" Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.880 [INFO][5922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.880 [INFO][5922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.887 [WARNING][5922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" HandleID="k8s-pod-network.c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Workload="ip--172--31--30--243-k8s-whisker--649cf9f94c--x7w52-eth0" Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.887 [INFO][5922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" HandleID="k8s-pod-network.c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Workload="ip--172--31--30--243-k8s-whisker--649cf9f94c--x7w52-eth0" Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.889 [INFO][5922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:58.898718 env[1756]: 2025-09-13 00:50:58.894 [INFO][5899] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:58.901481 env[1756]: time="2025-09-13T00:50:58.898767594Z" level=info msg="TearDown network for sandbox \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\" successfully" Sep 13 00:50:58.901481 env[1756]: time="2025-09-13T00:50:58.898802521Z" level=info msg="StopPodSandbox for \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\" returns successfully" Sep 13 00:50:59.068905 env[1756]: time="2025-09-13T00:50:59.067846494Z" level=info msg="RemovePodSandbox for \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\"" Sep 13 00:50:59.068905 env[1756]: time="2025-09-13T00:50:59.067916335Z" level=info msg="Forcibly stopping sandbox \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\"" Sep 13 00:50:59.447123 sshd[5793]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:59.502221 kernel: kauditd_printk_skb: 9 callbacks suppressed Sep 13 00:50:59.538053 kernel: audit: type=1130 audit(1757724659.491:460): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.30.243:22-147.75.109.163:32986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:59.544042 kernel: audit: type=1106 audit(1757724659.501:461): pid=5793 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:59.544153 kernel: audit: type=1104 audit(1757724659.520:462): pid=5793 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:59.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.30.243:22-147.75.109.163:32986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:59.590365 kernel: audit: type=1131 audit(1757724659.579:463): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.30.243:22-147.75.109.163:32978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:50:59.501000 audit[5793]: USER_END pid=5793 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:59.520000 audit[5793]: CRED_DISP pid=5793 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:59.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.30.243:22-147.75.109.163:32978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:59.483700 systemd[1]: Started sshd@11-172.31.30.243:22-147.75.109.163:32986.service. Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.305 [WARNING][5946] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" WorkloadEndpoint="ip--172--31--30--243-k8s-whisker--649cf9f94c--x7w52-eth0" Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.306 [INFO][5946] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.306 [INFO][5946] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" iface="eth0" netns="" Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.306 [INFO][5946] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.306 [INFO][5946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.422 [INFO][5954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" HandleID="k8s-pod-network.c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Workload="ip--172--31--30--243-k8s-whisker--649cf9f94c--x7w52-eth0" Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.422 [INFO][5954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.422 [INFO][5954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.444 [WARNING][5954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" HandleID="k8s-pod-network.c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Workload="ip--172--31--30--243-k8s-whisker--649cf9f94c--x7w52-eth0" Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.444 [INFO][5954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" HandleID="k8s-pod-network.c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Workload="ip--172--31--30--243-k8s-whisker--649cf9f94c--x7w52-eth0" Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.446 [INFO][5954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:50:59.608505 env[1756]: 2025-09-13 00:50:59.455 [INFO][5946] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474" Sep 13 00:50:59.608505 env[1756]: time="2025-09-13T00:50:59.457738119Z" level=info msg="TearDown network for sandbox \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\" successfully" Sep 13 00:50:59.608505 env[1756]: time="2025-09-13T00:50:59.501404156Z" level=info msg="RemovePodSandbox \"c5faa6b9a6f01ceaefcc1740b641eca45d7b48b7197f839f7cf141503fd3e474\" returns successfully" Sep 13 00:50:59.580521 systemd[1]: sshd@10-172.31.30.243:22-147.75.109.163:32978.service: Deactivated successfully. Sep 13 00:50:59.596838 systemd-logind[1741]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:50:59.596858 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:50:59.616040 systemd-logind[1741]: Removed session 11. 
Sep 13 00:50:59.634000 audit[5964]: NETFILTER_CFG table=filter:124 family=2 entries=12 op=nft_register_rule pid=5964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:59.678450 kernel: audit: type=1325 audit(1757724659.634:464): table=filter:124 family=2 entries=12 op=nft_register_rule pid=5964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:59.678566 kernel: audit: type=1300 audit(1757724659.634:464): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fffb6420b60 a2=0 a3=7fffb6420b4c items=0 ppid=2797 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:59.678609 kernel: audit: type=1327 audit(1757724659.634:464): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:59.678643 kernel: audit: type=1325 audit(1757724659.659:465): table=nat:125 family=2 entries=22 op=nft_register_rule pid=5964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:59.678688 kernel: audit: type=1300 audit(1757724659.659:465): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fffb6420b60 a2=0 a3=7fffb6420b4c items=0 ppid=2797 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:59.634000 audit[5964]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fffb6420b60 a2=0 a3=7fffb6420b4c items=0 ppid=2797 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:59.690587 kernel: audit: type=1327 audit(1757724659.659:465): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:59.634000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:59.659000 audit[5964]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=5964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:50:59.659000 audit[5964]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fffb6420b60 a2=0 a3=7fffb6420b4c items=0 ppid=2797 pid=5964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:59.659000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:50:59.993000 audit[5961]: USER_ACCT pid=5961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:59.995000 audit[5961]: CRED_ACQ pid=5961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:50:59.995000 audit[5961]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5a988700 a2=3 a3=0 items=0 ppid=1 pid=5961 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:50:59.995000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:00.009941 sshd[5961]: Accepted publickey for 
core from 147.75.109.163 port 32986 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:51:00.005985 sshd[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:51:00.074428 systemd[1]: Started session-12.scope. Sep 13 00:51:00.075677 systemd-logind[1741]: New session 12 of user core. Sep 13 00:51:00.136000 audit[5961]: USER_START pid=5961 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:00.142000 audit[5967]: CRED_ACQ pid=5967 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:00.227706 env[1756]: time="2025-09-13T00:51:00.227483629Z" level=info msg="StopPodSandbox for \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\"" Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.494 [WARNING][5980] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0", GenerateName:"calico-kube-controllers-5775d679df-", Namespace:"calico-system", SelfLink:"", UID:"bab504dd-aec7-4945-b513-319b96cc26d8", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5775d679df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9", Pod:"calico-kube-controllers-5775d679df-x29l7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08f62cf9d8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.500 [INFO][5980] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.500 [INFO][5980] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" iface="eth0" netns="" Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.500 [INFO][5980] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.500 [INFO][5980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.606 [INFO][5993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" HandleID="k8s-pod-network.d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.608 [INFO][5993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.608 [INFO][5993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.621 [WARNING][5993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" HandleID="k8s-pod-network.d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.621 [INFO][5993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" HandleID="k8s-pod-network.d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.624 [INFO][5993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:51:00.637464 env[1756]: 2025-09-13 00:51:00.629 [INFO][5980] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:51:00.640226 env[1756]: time="2025-09-13T00:51:00.637725248Z" level=info msg="TearDown network for sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\" successfully" Sep 13 00:51:00.640226 env[1756]: time="2025-09-13T00:51:00.637777246Z" level=info msg="StopPodSandbox for \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\" returns successfully" Sep 13 00:51:00.890594 kubelet[2691]: I0913 00:51:00.818233 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dcbb86cdd-p4282" podStartSLOduration=35.634697302 podStartE2EDuration="49.269292052s" podCreationTimestamp="2025-09-13 00:50:11 +0000 UTC" firstStartedPulling="2025-09-13 00:50:40.713617864 +0000 UTC m=+49.021246799" lastFinishedPulling="2025-09-13 00:50:54.348212595 +0000 UTC m=+62.655841549" observedRunningTime="2025-09-13 00:50:59.555453952 +0000 UTC m=+67.863082904" watchObservedRunningTime="2025-09-13 00:51:00.269292052 +0000 UTC 
m=+68.576921004" Sep 13 00:51:00.927479 env[1756]: time="2025-09-13T00:51:00.927422781Z" level=info msg="RemovePodSandbox for \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\"" Sep 13 00:51:00.927666 env[1756]: time="2025-09-13T00:51:00.927487383Z" level=info msg="Forcibly stopping sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\"" Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.077 [WARNING][6008] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0", GenerateName:"calico-kube-controllers-5775d679df-", Namespace:"calico-system", SelfLink:"", UID:"bab504dd-aec7-4945-b513-319b96cc26d8", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5775d679df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-243", ContainerID:"51af3c6c870844368830dc3995e19b9e23d7cbd5859ed2075228eb15f01524e9", Pod:"calico-kube-controllers-5775d679df-x29l7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08f62cf9d8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.078 [INFO][6008] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.078 [INFO][6008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" iface="eth0" netns="" Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.078 [INFO][6008] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.078 [INFO][6008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.143 [INFO][6015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" HandleID="k8s-pod-network.d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.148 [INFO][6015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.149 [INFO][6015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.162 [WARNING][6015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" HandleID="k8s-pod-network.d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.162 [INFO][6015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" HandleID="k8s-pod-network.d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Workload="ip--172--31--30--243-k8s-calico--kube--controllers--5775d679df--x29l7-eth0" Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.165 [INFO][6015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:51:01.181803 env[1756]: 2025-09-13 00:51:01.177 [INFO][6008] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1" Sep 13 00:51:01.186534 env[1756]: time="2025-09-13T00:51:01.184028935Z" level=info msg="TearDown network for sandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\" successfully" Sep 13 00:51:01.211462 env[1756]: time="2025-09-13T00:51:01.211371688Z" level=info msg="RemovePodSandbox \"d2da403755174c430305868a16f6dbf305587f8f9562815948b4b2eae48e2ad1\" returns successfully" Sep 13 00:51:01.713363 kubelet[2691]: I0913 00:51:01.711582 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dbdlb" podStartSLOduration=30.890975751 podStartE2EDuration="47.71154358s" podCreationTimestamp="2025-09-13 00:50:14 +0000 UTC" firstStartedPulling="2025-09-13 00:50:40.205009127 +0000 UTC m=+48.512638064" lastFinishedPulling="2025-09-13 00:50:57.025576947 +0000 UTC m=+65.333205893" observedRunningTime="2025-09-13 00:51:01.696372726 +0000 UTC m=+70.004001681" watchObservedRunningTime="2025-09-13 00:51:01.71154358 +0000 UTC m=+70.019172534" Sep 13 
00:51:04.284569 kubelet[2691]: E0913 00:51:04.284528 2691 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.089s" Sep 13 00:51:04.782000 audit[6045]: NETFILTER_CFG table=filter:126 family=2 entries=11 op=nft_register_rule pid=6045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:04.813954 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:51:04.818214 kernel: audit: type=1325 audit(1757724664.782:471): table=filter:126 family=2 entries=11 op=nft_register_rule pid=6045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:04.818365 kernel: audit: type=1300 audit(1757724664.782:471): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fffa06b9090 a2=0 a3=7fffa06b907c items=0 ppid=2797 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:04.818415 kernel: audit: type=1327 audit(1757724664.782:471): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:04.782000 audit[6045]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fffa06b9090 a2=0 a3=7fffa06b907c items=0 ppid=2797 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:04.847754 kernel: audit: type=1325 audit(1757724664.814:472): table=nat:127 family=2 entries=29 op=nft_register_chain pid=6045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:04.847951 kernel: audit: type=1300 audit(1757724664.814:472): arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fffa06b9090 a2=0 a3=7fffa06b907c items=0 ppid=2797 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:04.848002 kernel: audit: type=1327 audit(1757724664.814:472): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:04.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:04.814000 audit[6045]: NETFILTER_CFG table=nat:127 family=2 entries=29 op=nft_register_chain pid=6045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:04.814000 audit[6045]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fffa06b9090 a2=0 a3=7fffa06b907c items=0 ppid=2797 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:04.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:05.118858 kubelet[2691]: E0913 00:51:05.044439 2691 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock" failed. No retries permitted until 2025-09-13 00:51:05.526824518 +0000 UTC m=+73.834453459 (durationBeforeRetry 500ms). 
Error: RegisterPlugin error -- failed to get plugin info using RPC GetInfo at socket /var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock, err: rpc error: code = DeadlineExceeded desc = context deadline exceeded Sep 13 00:51:06.473478 kubelet[2691]: E0913 00:51:06.469959 2691 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.167s" Sep 13 00:51:06.590915 sshd[5961]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:06.655993 kernel: audit: type=1106 audit(1757724666.644:473): pid=5961 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:06.644000 audit[5961]: USER_END pid=5961 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:06.672760 kernel: audit: type=1104 audit(1757724666.653:474): pid=5961 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:06.653000 audit[5961]: CRED_DISP pid=5961 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:06.685176 systemd[1]: sshd@11-172.31.30.243:22-147.75.109.163:32986.service: Deactivated successfully. 
Sep 13 00:51:06.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.30.243:22-147.75.109.163:32986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:06.698090 kernel: audit: type=1131 audit(1757724666.686:475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.30.243:22-147.75.109.163:32986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:06.701091 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:51:06.701631 systemd-logind[1741]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:51:06.717417 systemd-logind[1741]: Removed session 12. Sep 13 00:51:06.903141 kubelet[2691]: I0913 00:51:06.889495 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dcbb86cdd-8mn66" podStartSLOduration=41.035256691 podStartE2EDuration="55.875633331s" podCreationTimestamp="2025-09-13 00:50:11 +0000 UTC" firstStartedPulling="2025-09-13 00:50:43.219695681 +0000 UTC m=+51.527324614" lastFinishedPulling="2025-09-13 00:50:58.060072322 +0000 UTC m=+66.367701254" observedRunningTime="2025-09-13 00:51:06.864084582 +0000 UTC m=+75.171713535" watchObservedRunningTime="2025-09-13 00:51:06.875633331 +0000 UTC m=+75.183262285" Sep 13 00:51:06.944558 kubelet[2691]: I0913 00:51:06.944517 2691 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:51:06.944558 kubelet[2691]: I0913 00:51:06.944569 2691 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:51:07.058000 audit[6050]: NETFILTER_CFG table=filter:128 family=2 entries=10 op=nft_register_rule 
pid=6050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:07.074262 kernel: audit: type=1325 audit(1757724667.058:476): table=filter:128 family=2 entries=10 op=nft_register_rule pid=6050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:07.058000 audit[6050]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd00ca34b0 a2=0 a3=7ffd00ca349c items=0 ppid=2797 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:07.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:07.070000 audit[6050]: NETFILTER_CFG table=nat:129 family=2 entries=32 op=nft_register_rule pid=6050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:07.070000 audit[6050]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffd00ca34b0 a2=0 a3=7ffd00ca349c items=0 ppid=2797 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:07.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:08.057000 audit[6052]: NETFILTER_CFG table=filter:130 family=2 entries=10 op=nft_register_rule pid=6052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:08.057000 audit[6052]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe2fe3c9e0 a2=0 a3=7ffe2fe3c9cc items=0 ppid=2797 pid=6052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:51:08.057000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:08.065000 audit[6052]: NETFILTER_CFG table=nat:131 family=2 entries=36 op=nft_register_chain pid=6052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:08.065000 audit[6052]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffe2fe3c9e0 a2=0 a3=7ffe2fe3c9cc items=0 ppid=2797 pid=6052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:08.065000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:11.619313 kernel: kauditd_printk_skb: 11 callbacks suppressed Sep 13 00:51:11.626823 kernel: audit: type=1130 audit(1757724671.606:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.30.243:22-147.75.109.163:38346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.30.243:22-147.75.109.163:38346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.607324 systemd[1]: Started sshd@12-172.31.30.243:22-147.75.109.163:38346.service. 
Sep 13 00:51:11.891000 audit[6053]: USER_ACCT pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:11.905078 kernel: audit: type=1101 audit(1757724671.891:481): pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:11.905533 sshd[6053]: Accepted publickey for core from 147.75.109.163 port 38346 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:51:11.904000 audit[6053]: CRED_ACQ pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:11.912790 sshd[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:51:11.916216 kernel: audit: type=1103 audit(1757724671.904:482): pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:11.923097 kernel: audit: type=1006 audit(1757724671.904:483): pid=6053 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 13 00:51:11.904000 audit[6053]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe35eedb0 a2=3 a3=0 items=0 ppid=1 pid=6053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:51:11.904000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:11.935471 kernel: audit: type=1300 audit(1757724671.904:483): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe35eedb0 a2=3 a3=0 items=0 ppid=1 pid=6053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:11.935848 kernel: audit: type=1327 audit(1757724671.904:483): proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:11.964057 systemd-logind[1741]: New session 13 of user core. Sep 13 00:51:11.964813 systemd[1]: Started session-13.scope. Sep 13 00:51:11.971000 audit[6053]: USER_START pid=6053 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:11.983773 kernel: audit: type=1105 audit(1757724671.971:484): pid=6053 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:11.983990 kernel: audit: type=1103 audit(1757724671.982:485): pid=6056 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:11.982000 audit[6056]: CRED_ACQ pid=6056 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:13.149996 
sshd[6053]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:13.152000 audit[6053]: USER_END pid=6053 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:13.172682 kernel: audit: type=1106 audit(1757724673.152:486): pid=6053 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:13.175509 kernel: audit: type=1104 audit(1757724673.152:487): pid=6053 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:13.152000 audit[6053]: CRED_DISP pid=6053 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:13.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.30.243:22-147.75.109.163:38346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:13.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.30.243:22-147.75.109.163:38350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:13.155920 systemd[1]: sshd@12-172.31.30.243:22-147.75.109.163:38346.service: Deactivated successfully. Sep 13 00:51:13.157194 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:51:13.160080 systemd-logind[1741]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:51:13.161333 systemd-logind[1741]: Removed session 13. Sep 13 00:51:13.174573 systemd[1]: Started sshd@13-172.31.30.243:22-147.75.109.163:38350.service. Sep 13 00:51:13.351000 audit[6065]: USER_ACCT pid=6065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:13.352609 sshd[6065]: Accepted publickey for core from 147.75.109.163 port 38350 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:51:13.353000 audit[6065]: CRED_ACQ pid=6065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:13.353000 audit[6065]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc19b47e30 a2=3 a3=0 items=0 ppid=1 pid=6065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:13.353000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:13.354450 sshd[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:51:13.361450 systemd-logind[1741]: New session 14 of user core. Sep 13 00:51:13.363164 systemd[1]: Started session-14.scope. 
Sep 13 00:51:13.369000 audit[6065]: USER_START pid=6065 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:13.371000 audit[6068]: CRED_ACQ pid=6068 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:14.300359 sshd[6065]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:14.309000 audit[6065]: USER_END pid=6065 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:14.311000 audit[6065]: CRED_DISP pid=6065 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:14.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.243:22-147.75.109.163:38352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.30.243:22-147.75.109.163:38350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.319444 systemd[1]: Started sshd@14-172.31.30.243:22-147.75.109.163:38352.service. 
Sep 13 00:51:14.320335 systemd[1]: sshd@13-172.31.30.243:22-147.75.109.163:38350.service: Deactivated successfully. Sep 13 00:51:14.321836 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:51:14.330542 systemd-logind[1741]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:51:14.333638 systemd-logind[1741]: Removed session 14. Sep 13 00:51:14.521000 audit[6076]: USER_ACCT pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:14.522924 sshd[6076]: Accepted publickey for core from 147.75.109.163 port 38352 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:51:14.523000 audit[6076]: CRED_ACQ pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:14.523000 audit[6076]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc96884490 a2=3 a3=0 items=0 ppid=1 pid=6076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:14.523000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:14.527684 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:51:14.537176 systemd[1]: Started session-15.scope. Sep 13 00:51:14.538150 systemd-logind[1741]: New session 15 of user core. 
Sep 13 00:51:14.544000 audit[6076]: USER_START pid=6076 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:14.546000 audit[6080]: CRED_ACQ pid=6080 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:23.830000 audit[6098]: NETFILTER_CFG table=filter:132 family=2 entries=22 op=nft_register_rule pid=6098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:24.167646 kernel: kauditd_printk_skb: 20 callbacks suppressed Sep 13 00:51:24.173967 kernel: audit: type=1325 audit(1757724683.830:504): table=filter:132 family=2 entries=22 op=nft_register_rule pid=6098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:24.179381 kernel: audit: type=1300 audit(1757724683.830:504): arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffc8fb04bc0 a2=0 a3=7ffc8fb04bac items=0 ppid=2797 pid=6098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:24.179485 kernel: audit: type=1327 audit(1757724683.830:504): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:24.179515 kernel: audit: type=1325 audit(1757724683.879:505): table=nat:133 family=2 entries=24 op=nft_register_rule pid=6098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:24.179545 kernel: audit: type=1300 audit(1757724683.879:505): arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffc8fb04bc0 a2=0 a3=0 
items=0 ppid=2797 pid=6098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:24.179568 kernel: audit: type=1327 audit(1757724683.879:505): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:23.830000 audit[6098]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffc8fb04bc0 a2=0 a3=7ffc8fb04bac items=0 ppid=2797 pid=6098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:23.830000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:23.879000 audit[6098]: NETFILTER_CFG table=nat:133 family=2 entries=24 op=nft_register_rule pid=6098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:23.879000 audit[6098]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffc8fb04bc0 a2=0 a3=0 items=0 ppid=2797 pid=6098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:23.879000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:28.671000 audit[6103]: NETFILTER_CFG table=filter:134 family=2 entries=34 op=nft_register_rule pid=6103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:28.946399 kernel: audit: type=1325 audit(1757724688.671:506): table=filter:134 family=2 entries=34 op=nft_register_rule pid=6103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 
00:51:28.965520 kernel: audit: type=1300 audit(1757724688.671:506): arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffea49084a0 a2=0 a3=7ffea490848c items=0 ppid=2797 pid=6103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:28.965782 kernel: audit: type=1327 audit(1757724688.671:506): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:28.965815 kernel: audit: type=1325 audit(1757724688.717:507): table=nat:135 family=2 entries=24 op=nft_register_rule pid=6103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:28.671000 audit[6103]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffea49084a0 a2=0 a3=7ffea490848c items=0 ppid=2797 pid=6103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:28.671000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:28.717000 audit[6103]: NETFILTER_CFG table=nat:135 family=2 entries=24 op=nft_register_rule pid=6103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:51:28.717000 audit[6103]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffea49084a0 a2=0 a3=0 items=0 ppid=2797 pid=6103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:28.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:51:30.482455 
env[1756]: time="2025-09-13T00:51:30.482394976Z" level=info msg="shim disconnected" id=e1613523b6616a759a942e6869856e4a81c2e6cac373ec82743765dff81a6b72 Sep 13 00:51:30.482455 env[1756]: time="2025-09-13T00:51:30.482451120Z" level=warning msg="cleaning up after shim disconnected" id=e1613523b6616a759a942e6869856e4a81c2e6cac373ec82743765dff81a6b72 namespace=k8s.io Sep 13 00:51:30.482455 env[1756]: time="2025-09-13T00:51:30.482462713Z" level=info msg="cleaning up dead shim" Sep 13 00:51:30.482455 env[1756]: time="2025-09-13T00:51:30.493190131Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:51:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6119 runtime=io.containerd.runc.v2\n" Sep 13 00:51:30.627839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1613523b6616a759a942e6869856e4a81c2e6cac373ec82743765dff81a6b72-rootfs.mount: Deactivated successfully. Sep 13 00:51:31.823851 sshd[6076]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:31.891212 kernel: kauditd_printk_skb: 2 callbacks suppressed Sep 13 00:51:31.896811 kernel: audit: type=1130 audit(1757724691.837:508): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.243:22-147.75.109.163:51458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:31.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.243:22-147.75.109.163:51458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:31.838010 systemd[1]: Started sshd@15-172.31.30.243:22-147.75.109.163:51458.service. 
Sep 13 00:51:31.943000 audit[6076]: USER_END pid=6076 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:31.955748 kernel: audit: type=1106 audit(1757724691.943:509): pid=6076 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:31.957458 systemd[1]: sshd@14-172.31.30.243:22-147.75.109.163:38352.service: Deactivated successfully. Sep 13 00:51:31.962410 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:51:31.964615 systemd-logind[1741]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:51:31.952000 audit[6076]: CRED_DISP pid=6076 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:31.976746 kernel: audit: type=1104 audit(1757724691.952:510): pid=6076 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:31.987477 kernel: audit: type=1131 audit(1757724691.956:511): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.243:22-147.75.109.163:38352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:31.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.243:22-147.75.109.163:38352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:31.975106 systemd-logind[1741]: Removed session 15. Sep 13 00:51:32.208000 audit[6131]: USER_ACCT pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:32.217808 sshd[6131]: Accepted publickey for core from 147.75.109.163 port 51458 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:51:32.220216 kernel: audit: type=1101 audit(1757724692.208:512): pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:32.220313 kernel: audit: type=1103 audit(1757724692.216:513): pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:32.216000 audit[6131]: CRED_ACQ pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:32.229200 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:51:32.238894 kernel: audit: type=1006 audit(1757724692.216:514): pid=6131 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Sep 13 00:51:32.243753 systemd[1]: Started session-16.scope. Sep 13 00:51:32.244275 systemd-logind[1741]: New session 16 of user core. Sep 13 00:51:32.216000 audit[6131]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5aa3ef90 a2=3 a3=0 items=0 ppid=1 pid=6131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:32.262799 kernel: audit: type=1300 audit(1757724692.216:514): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5aa3ef90 a2=3 a3=0 items=0 ppid=1 pid=6131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:32.262903 kernel: audit: type=1327 audit(1757724692.216:514): proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:32.216000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:32.270000 audit[6131]: USER_START pid=6131 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:32.283914 kernel: audit: type=1105 audit(1757724692.270:515): pid=6131 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:32.283000 audit[6136]: CRED_ACQ pid=6136 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:35.794190 systemd[1]: run-containerd-runc-k8s.io-4cec1b91f0be62237b5ca462711de24a749225539ad6ee9f6727c39587f3d7bb-runc.YgWSuL.mount: Deactivated successfully. Sep 13 00:51:38.955635 sshd[6131]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:39.034158 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:51:39.035209 kernel: audit: type=1106 audit(1757724699.002:517): pid=6131 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:39.037321 kernel: audit: type=1104 audit(1757724699.016:518): pid=6131 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:39.002000 audit[6131]: USER_END pid=6131 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:39.050322 kernel: audit: type=1130 audit(1757724699.038:519): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.30.243:22-147.75.109.163:51460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:39.016000 audit[6131]: CRED_DISP pid=6131 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:39.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.30.243:22-147.75.109.163:51460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:39.038838 systemd[1]: Started sshd@16-172.31.30.243:22-147.75.109.163:51460.service. Sep 13 00:51:39.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.243:22-147.75.109.163:51458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:39.049655 systemd[1]: sshd@15-172.31.30.243:22-147.75.109.163:51458.service: Deactivated successfully. Sep 13 00:51:39.052401 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:51:39.052962 systemd-logind[1741]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:51:39.062716 kernel: audit: type=1131 audit(1757724699.049:520): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.243:22-147.75.109.163:51458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:39.062737 systemd-logind[1741]: Removed session 16. 
Sep 13 00:51:39.283000 audit[6221]: USER_ACCT pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:39.294567 kernel: audit: type=1101 audit(1757724699.283:521): pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:39.294672 sshd[6221]: Accepted publickey for core from 147.75.109.163 port 51460 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:51:39.293000 audit[6221]: CRED_ACQ pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:39.301789 sshd[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:51:39.313975 kernel: audit: type=1103 audit(1757724699.293:522): pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:39.315020 kernel: audit: type=1006 audit(1757724699.293:523): pid=6221 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Sep 13 00:51:39.315096 kernel: audit: type=1300 audit(1757724699.293:523): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff07719f50 a2=3 a3=0 items=0 ppid=1 pid=6221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:39.293000 audit[6221]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff07719f50 a2=3 a3=0 items=0 ppid=1 pid=6221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:39.293000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:39.328961 kernel: audit: type=1327 audit(1757724699.293:523): proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:39.328738 systemd[1]: Started session-17.scope. Sep 13 00:51:39.329991 systemd-logind[1741]: New session 17 of user core. Sep 13 00:51:39.337000 audit[6221]: USER_START pid=6221 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:39.337000 audit[6226]: CRED_ACQ pid=6226 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:39.348904 kernel: audit: type=1105 audit(1757724699.337:524): pid=6221 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:39.404147 kubelet[2691]: E0913 00:51:39.404099 2691 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.341s" Sep 13 00:51:39.684321 kubelet[2691]: I0913 00:51:39.684287 2691 scope.go:117] "RemoveContainer" 
containerID="e1613523b6616a759a942e6869856e4a81c2e6cac373ec82743765dff81a6b72" Sep 13 00:51:40.133204 env[1756]: time="2025-09-13T00:51:40.133078415Z" level=info msg="CreateContainer within sandbox \"13f095153e18192a365eb77d20210db9bf5a4e86c162c39d7b53a0fe4f8f9a26\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 13 00:51:40.213948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4194985095.mount: Deactivated successfully. Sep 13 00:51:40.223677 env[1756]: time="2025-09-13T00:51:40.223619720Z" level=info msg="CreateContainer within sandbox \"13f095153e18192a365eb77d20210db9bf5a4e86c162c39d7b53a0fe4f8f9a26\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a2693c9ca9e86700b4bf7a9ac9e123d3fcf1cec53dab70ce27b43747415c4600\"" Sep 13 00:51:40.229664 env[1756]: time="2025-09-13T00:51:40.229604818Z" level=info msg="StartContainer for \"a2693c9ca9e86700b4bf7a9ac9e123d3fcf1cec53dab70ce27b43747415c4600\"" Sep 13 00:51:40.375467 env[1756]: time="2025-09-13T00:51:40.373357478Z" level=info msg="StartContainer for \"a2693c9ca9e86700b4bf7a9ac9e123d3fcf1cec53dab70ce27b43747415c4600\" returns successfully" Sep 13 00:51:41.592778 sshd[6221]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:41.649000 audit[6221]: USER_END pid=6221 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:41.651000 audit[6221]: CRED_DISP pid=6221 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:41.675298 systemd[1]: sshd@16-172.31.30.243:22-147.75.109.163:51460.service: Deactivated 
successfully. Sep 13 00:51:41.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.30.243:22-147.75.109.163:51460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:41.679978 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:51:41.680456 systemd-logind[1741]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:51:41.693176 systemd-logind[1741]: Removed session 17. Sep 13 00:51:46.623349 systemd[1]: Started sshd@17-172.31.30.243:22-147.75.109.163:35226.service. Sep 13 00:51:46.649284 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:51:46.651930 kernel: audit: type=1130 audit(1757724706.623:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.30.243:22-147.75.109.163:35226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:46.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.30.243:22-147.75.109.163:35226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:46.885000 audit[6290]: USER_ACCT pid=6290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:46.919476 kernel: audit: type=1101 audit(1757724706.885:530): pid=6290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:46.919586 kernel: audit: type=1103 audit(1757724706.894:531): pid=6290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:46.920277 kernel: audit: type=1006 audit(1757724706.894:532): pid=6290 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Sep 13 00:51:46.921274 kernel: audit: type=1300 audit(1757724706.894:532): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd94dbaa30 a2=3 a3=0 items=0 ppid=1 pid=6290 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:46.921325 kernel: audit: type=1327 audit(1757724706.894:532): proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:46.894000 audit[6290]: CRED_ACQ pid=6290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:46.894000 audit[6290]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 
a0=5 a1=7ffd94dbaa30 a2=3 a3=0 items=0 ppid=1 pid=6290 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:46.894000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:46.929772 sshd[6290]: Accepted publickey for core from 147.75.109.163 port 35226 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:51:46.899987 sshd[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:51:46.943846 systemd-logind[1741]: New session 18 of user core. Sep 13 00:51:46.945704 systemd[1]: Started session-18.scope. Sep 13 00:51:46.953000 audit[6290]: USER_START pid=6290 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:46.957000 audit[6293]: CRED_ACQ pid=6293 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:46.974344 kernel: audit: type=1105 audit(1757724706.953:533): pid=6290 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:46.975608 kernel: audit: type=1103 audit(1757724706.957:534): pid=6293 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 
00:51:48.114727 sshd[6290]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:48.117000 audit[6290]: USER_END pid=6290 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:48.127629 kernel: audit: type=1106 audit(1757724708.117:535): pid=6290 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:48.118000 audit[6290]: CRED_DISP pid=6290 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:48.136049 kernel: audit: type=1104 audit(1757724708.118:536): pid=6290 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:48.141103 systemd-logind[1741]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:51:48.142959 systemd[1]: sshd@17-172.31.30.243:22-147.75.109.163:35226.service: Deactivated successfully. Sep 13 00:51:48.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.30.243:22-147.75.109.163:35226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:48.143969 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:51:48.145072 systemd-logind[1741]: Removed session 18. 
Sep 13 00:51:53.142478 systemd[1]: Started sshd@18-172.31.30.243:22-147.75.109.163:56968.service. Sep 13 00:51:53.149620 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:51:53.150359 kernel: audit: type=1130 audit(1757724713.143:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.30.243:22-147.75.109.163:56968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:53.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.30.243:22-147.75.109.163:56968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:53.414000 audit[6306]: USER_ACCT pid=6306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:53.441014 kernel: audit: type=1101 audit(1757724713.414:539): pid=6306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:53.441111 kernel: audit: type=1103 audit(1757724713.427:540): pid=6306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:53.441156 kernel: audit: type=1006 audit(1757724713.427:541): pid=6306 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Sep 13 00:51:53.427000 audit[6306]: CRED_ACQ pid=6306 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:53.441378 sshd[6306]: Accepted publickey for core from 147.75.109.163 port 56968 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:51:53.457139 kernel: audit: type=1300 audit(1757724713.427:541): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc89a49dd0 a2=3 a3=0 items=0 ppid=1 pid=6306 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:53.457229 kernel: audit: type=1327 audit(1757724713.427:541): proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:53.427000 audit[6306]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc89a49dd0 a2=3 a3=0 items=0 ppid=1 pid=6306 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:53.427000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:53.434514 sshd[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:51:53.465000 audit[6306]: USER_START pid=6306 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:53.455838 systemd-logind[1741]: New session 19 of user core. Sep 13 00:51:53.458500 systemd[1]: Started session-19.scope. 
Sep 13 00:51:53.475937 kernel: audit: type=1105 audit(1757724713.465:542): pid=6306 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:53.477058 kernel: audit: type=1103 audit(1757724713.474:543): pid=6309 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:53.474000 audit[6309]: CRED_ACQ pid=6309 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:54.349817 sshd[6306]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:54.350000 audit[6306]: USER_END pid=6306 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:54.360924 kernel: audit: type=1106 audit(1757724714.350:544): pid=6306 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:54.362280 kernel: audit: type=1104 audit(1757724714.350:545): pid=6306 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:54.350000 audit[6306]: CRED_DISP pid=6306 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:54.364289 systemd-logind[1741]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:51:54.366557 systemd[1]: sshd@18-172.31.30.243:22-147.75.109.163:56968.service: Deactivated successfully. Sep 13 00:51:54.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.30.243:22-147.75.109.163:56968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:54.367708 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:51:54.370018 systemd-logind[1741]: Removed session 19. Sep 13 00:51:55.141802 systemd[1]: run-containerd-runc-k8s.io-4cec1b91f0be62237b5ca462711de24a749225539ad6ee9f6727c39587f3d7bb-runc.TkE3zd.mount: Deactivated successfully. Sep 13 00:51:59.411667 systemd[1]: Started sshd@19-172.31.30.243:22-147.75.109.163:56976.service. Sep 13 00:51:59.447895 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:51:59.450114 kernel: audit: type=1130 audit(1757724719.414:547): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.30.243:22-147.75.109.163:56976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:59.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.30.243:22-147.75.109.163:56976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:59.743000 audit[6367]: USER_ACCT pid=6367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:59.754157 kernel: audit: type=1101 audit(1757724719.743:548): pid=6367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:59.754264 sshd[6367]: Accepted publickey for core from 147.75.109.163 port 56976 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:51:59.754000 audit[6367]: CRED_ACQ pid=6367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:59.758925 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:51:59.767762 kernel: audit: type=1103 audit(1757724719.754:549): pid=6367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:59.769180 kernel: audit: type=1006 audit(1757724719.754:550): pid=6367 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Sep 13 00:51:59.769331 kernel: audit: type=1300 audit(1757724719.754:550): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde1fc12b0 a2=3 a3=0 items=0 ppid=1 pid=6367 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:59.754000 audit[6367]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde1fc12b0 a2=3 a3=0 items=0 ppid=1 pid=6367 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:59.754000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:59.780657 kernel: audit: type=1327 audit(1757724719.754:550): proctitle=737368643A20636F7265205B707269765D Sep 13 00:51:59.799144 systemd-logind[1741]: New session 20 of user core. Sep 13 00:51:59.802135 systemd[1]: Started session-20.scope. Sep 13 00:51:59.809000 audit[6367]: USER_START pid=6367 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:59.822240 kernel: audit: type=1105 audit(1757724719.809:551): pid=6367 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:59.820000 audit[6370]: CRED_ACQ pid=6370 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:51:59.831940 kernel: audit: type=1103 audit(1757724719.820:552): pid=6370 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Sep 13 00:52:02.847000 audit[6383]: NETFILTER_CFG table=filter:136 family=2 entries=33 op=nft_register_rule pid=6383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:52:03.049266 kernel: audit: type=1325 audit(1757724722.847:553): table=filter:136 family=2 entries=33 op=nft_register_rule pid=6383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:52:03.068279 kernel: audit: type=1300 audit(1757724722.847:553): arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffca4ae7a60 a2=0 a3=7ffca4ae7a4c items=0 ppid=2797 pid=6383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:02.847000 audit[6383]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffca4ae7a60 a2=0 a3=7ffca4ae7a4c items=0 ppid=2797 pid=6383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:02.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:52:02.938000 audit[6383]: NETFILTER_CFG table=nat:137 family=2 entries=31 op=nft_register_chain pid=6383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:52:02.938000 audit[6383]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffca4ae7a60 a2=0 a3=7ffca4ae7a4c items=0 ppid=2797 pid=6383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:02.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 
00:52:07.646881 sshd[6367]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:07.759765 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:52:07.763526 kernel: audit: type=1106 audit(1757724727.726:555): pid=6367 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:52:07.763608 kernel: audit: type=1104 audit(1757724727.736:556): pid=6367 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:52:07.726000 audit[6367]: USER_END pid=6367 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:52:07.736000 audit[6367]: CRED_DISP pid=6367 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:52:07.778237 systemd[1]: sshd@19-172.31.30.243:22-147.75.109.163:56976.service: Deactivated successfully. Sep 13 00:52:07.784203 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:52:07.784532 systemd-logind[1741]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:52:07.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.30.243:22-147.75.109.163:56976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:07.799913 kernel: audit: type=1131 audit(1757724727.779:557): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.30.243:22-147.75.109.163:56976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:07.821639 systemd-logind[1741]: Removed session 20. Sep 13 00:52:09.846000 audit[6419]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=6419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:52:09.918384 kernel: audit: type=1325 audit(1757724729.846:558): table=filter:138 family=2 entries=20 op=nft_register_rule pid=6419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:52:09.922779 kernel: audit: type=1300 audit(1757724729.846:558): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffed3e54f70 a2=0 a3=7ffed3e54f5c items=0 ppid=2797 pid=6419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:09.922856 kernel: audit: type=1327 audit(1757724729.846:558): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:52:09.922932 kernel: audit: type=1325 audit(1757724729.874:559): table=nat:139 family=2 entries=110 op=nft_register_chain pid=6419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:52:09.922984 kernel: audit: type=1300 audit(1757724729.874:559): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffed3e54f70 a2=0 a3=7ffed3e54f5c items=0 ppid=2797 pid=6419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:09.923913 kernel: audit: type=1327 audit(1757724729.874:559): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:52:09.846000 audit[6419]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffed3e54f70 a2=0 a3=7ffed3e54f5c items=0 ppid=2797 pid=6419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:09.846000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:52:09.874000 audit[6419]: NETFILTER_CFG table=nat:139 family=2 entries=110 op=nft_register_chain pid=6419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:52:09.874000 audit[6419]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffed3e54f70 a2=0 a3=7ffed3e54f5c items=0 ppid=2797 pid=6419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:09.874000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:52:12.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.30.243:22-147.75.109.163:34812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:12.749792 kernel: audit: type=1130 audit(1757724732.716:560): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.30.243:22-147.75.109.163:34812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:12.713176 systemd[1]: Started sshd@20-172.31.30.243:22-147.75.109.163:34812.service. 
Sep 13 00:52:13.040000 audit[6442]: USER_ACCT pid=6442 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:52:13.051239 kernel: audit: type=1101 audit(1757724733.040:561): pid=6442 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:52:13.051369 sshd[6442]: Accepted publickey for core from 147.75.109.163 port 34812 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:52:13.051000 audit[6442]: CRED_ACQ pid=6442 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:52:13.066712 kernel: audit: type=1103 audit(1757724733.051:562): pid=6442 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:52:13.058422 sshd[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:13.085898 kernel: audit: type=1006 audit(1757724733.051:563): pid=6442 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Sep 13 00:52:13.051000 audit[6442]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6b4960d0 a2=3 a3=0 items=0 ppid=1 pid=6442 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null)
Sep 13 00:52:13.104309 kernel: audit: type=1300 audit(1757724733.051:563): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6b4960d0 a2=3 a3=0 items=0 ppid=1 pid=6442 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:13.104398 kernel: audit: type=1327 audit(1757724733.051:563): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:52:13.051000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:52:13.103678 systemd[1]: Started session-21.scope.
Sep 13 00:52:13.106800 systemd-logind[1741]: New session 21 of user core.
Sep 13 00:52:13.134000 audit[6442]: USER_START pid=6442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:13.148963 kernel: audit: type=1105 audit(1757724733.134:564): pid=6442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:13.137000 audit[6445]: CRED_ACQ pid=6445 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:13.161934 kernel: audit: type=1103 audit(1757724733.137:565): pid=6445 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:14.442425 sshd[6442]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:14.442000 audit[6442]: USER_END pid=6442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:14.454917 kernel: audit: type=1106 audit(1757724734.442:566): pid=6442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:14.443000 audit[6442]: CRED_DISP pid=6442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:14.457779 systemd[1]: sshd@20-172.31.30.243:22-147.75.109.163:34812.service: Deactivated successfully.
Sep 13 00:52:14.474085 kernel: audit: type=1104 audit(1757724734.443:567): pid=6442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:14.474199 kernel: audit: type=1131 audit(1757724734.455:568): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.30.243:22-147.75.109.163:34812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:14.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.30.243:22-147.75.109.163:34812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:14.459783 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:52:14.460538 systemd-logind[1741]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:52:14.461717 systemd-logind[1741]: Removed session 21.
Sep 13 00:52:19.479245 systemd[1]: Started sshd@21-172.31.30.243:22-147.75.109.163:34828.service.
Sep 13 00:52:19.496865 kernel: audit: type=1130 audit(1757724739.479:569): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.243:22-147.75.109.163:34828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.243:22-147.75.109.163:34828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.802866 sshd[6455]: Accepted publickey for core from 147.75.109.163 port 34828 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:52:19.816807 kernel: audit: type=1101 audit(1757724739.800:570): pid=6455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:19.800000 audit[6455]: USER_ACCT pid=6455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:19.820107 sshd[6455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:19.814000 audit[6455]: CRED_ACQ pid=6455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:19.832893 kernel: audit: type=1103 audit(1757724739.814:571): pid=6455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:19.845433 systemd-logind[1741]: New session 22 of user core.
Sep 13 00:52:19.847038 systemd[1]: Started session-22.scope.
Sep 13 00:52:19.854922 kernel: audit: type=1006 audit(1757724739.815:572): pid=6455 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Sep 13 00:52:19.815000 audit[6455]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe09418a20 a2=3 a3=0 items=0 ppid=1 pid=6455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:19.881598 kernel: audit: type=1300 audit(1757724739.815:572): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe09418a20 a2=3 a3=0 items=0 ppid=1 pid=6455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:19.815000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:52:19.909953 kernel: audit: type=1327 audit(1757724739.815:572): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:52:19.910071 kernel: audit: type=1105 audit(1757724739.860:573): pid=6455 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:19.860000 audit[6455]: USER_START pid=6455 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:19.862000 audit[6458]: CRED_ACQ pid=6458 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:19.919922 kernel: audit: type=1103 audit(1757724739.862:574): pid=6458 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:20.768628 sshd[6455]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:20.793526 kernel: audit: type=1106 audit(1757724740.769:575): pid=6455 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:20.793654 kernel: audit: type=1104 audit(1757724740.769:576): pid=6455 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:20.769000 audit[6455]: USER_END pid=6455 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:20.769000 audit[6455]: CRED_DISP pid=6455 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:20.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.243:22-147.75.109.163:34828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:20.772798 systemd-logind[1741]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:52:20.774596 systemd[1]: sshd@21-172.31.30.243:22-147.75.109.163:34828.service: Deactivated successfully.
Sep 13 00:52:20.775781 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:52:20.777426 systemd-logind[1741]: Removed session 22.
Sep 13 00:52:25.156301 systemd[1]: run-containerd-runc-k8s.io-4cec1b91f0be62237b5ca462711de24a749225539ad6ee9f6727c39587f3d7bb-runc.vkwGQN.mount: Deactivated successfully.
Sep 13 00:52:25.802709 systemd[1]: Started sshd@22-172.31.30.243:22-147.75.109.163:60550.service.
Sep 13 00:52:25.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.243:22-147.75.109.163:60550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:25.808516 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:52:25.814405 kernel: audit: type=1130 audit(1757724745.804:578): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.243:22-147.75.109.163:60550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:26.117000 audit[6507]: USER_ACCT pid=6507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:26.128397 sshd[6507]: Accepted publickey for core from 147.75.109.163 port 60550 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:52:26.132278 kernel: audit: type=1101 audit(1757724746.117:579): pid=6507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:26.133128 kernel: audit: type=1103 audit(1757724746.127:580): pid=6507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:26.127000 audit[6507]: CRED_ACQ pid=6507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:26.146326 kernel: audit: type=1006 audit(1757724746.127:581): pid=6507 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Sep 13 00:52:26.127000 audit[6507]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc236faa60 a2=3 a3=0 items=0 ppid=1 pid=6507 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:26.157612 sshd[6507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:26.160713 kernel: audit: type=1300 audit(1757724746.127:581): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc236faa60 a2=3 a3=0 items=0 ppid=1 pid=6507 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:26.160809 kernel: audit: type=1327 audit(1757724746.127:581): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:52:26.127000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:52:26.191096 systemd[1]: Started session-23.scope.
Sep 13 00:52:26.191609 systemd-logind[1741]: New session 23 of user core.
Sep 13 00:52:26.216000 audit[6507]: USER_START pid=6507 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:26.228906 kernel: audit: type=1105 audit(1757724746.216:582): pid=6507 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:26.219000 audit[6510]: CRED_ACQ pid=6510 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:26.238041 kernel: audit: type=1103 audit(1757724746.219:583): pid=6510 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:27.194753 sshd[6507]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:27.196000 audit[6507]: USER_END pid=6507 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:27.209979 kernel: audit: type=1106 audit(1757724747.196:584): pid=6507 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:27.208404 systemd[1]: sshd@22-172.31.30.243:22-147.75.109.163:60550.service: Deactivated successfully.
Sep 13 00:52:27.209679 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:52:27.212028 systemd-logind[1741]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:52:27.213222 systemd-logind[1741]: Removed session 23.
Sep 13 00:52:27.196000 audit[6507]: CRED_DISP pid=6507 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:27.223239 kernel: audit: type=1104 audit(1757724747.196:585): pid=6507 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:27.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.243:22-147.75.109.163:60550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:32.230177 systemd[1]: Started sshd@23-172.31.30.243:22-147.75.109.163:42138.service.
Sep 13 00:52:32.250000 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:52:32.253010 kernel: audit: type=1130 audit(1757724752.229:587): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.243:22-147.75.109.163:42138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:32.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.243:22-147.75.109.163:42138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:32.532000 audit[6544]: USER_ACCT pid=6544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:32.554087 kernel: audit: type=1101 audit(1757724752.532:588): pid=6544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:32.554815 kernel: audit: type=1103 audit(1757724752.542:589): pid=6544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:32.542000 audit[6544]: CRED_ACQ pid=6544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:32.555722 sshd[6544]: Accepted publickey for core from 147.75.109.163 port 42138 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:52:32.556353 sshd[6544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:32.542000 audit[6544]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8e9d9fd0 a2=3 a3=0 items=0 ppid=1 pid=6544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:32.573718 kernel: audit: type=1006 audit(1757724752.542:590): pid=6544 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Sep 13 00:52:32.573868 kernel: audit: type=1300 audit(1757724752.542:590): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8e9d9fd0 a2=3 a3=0 items=0 ppid=1 pid=6544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:32.542000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:52:32.579457 kernel: audit: type=1327 audit(1757724752.542:590): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:52:32.588967 systemd[1]: Started session-24.scope.
Sep 13 00:52:32.590593 systemd-logind[1741]: New session 24 of user core.
Sep 13 00:52:32.603000 audit[6544]: USER_START pid=6544 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:32.614900 kernel: audit: type=1105 audit(1757724752.603:591): pid=6544 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:32.615000 audit[6547]: CRED_ACQ pid=6547 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:32.627910 kernel: audit: type=1103 audit(1757724752.615:592): pid=6547 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:33.253840 systemd[1]: run-containerd-runc-k8s.io-478e0c07c316148c4da74df592e48194091720da467093fee0f514858cab9e32-runc.sBKUGn.mount: Deactivated successfully.
Sep 13 00:52:33.913944 sshd[6544]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:33.915000 audit[6544]: USER_END pid=6544 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:33.928908 kernel: audit: type=1106 audit(1757724753.915:593): pid=6544 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:33.927000 audit[6544]: CRED_DISP pid=6544 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:33.931711 systemd-logind[1741]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:52:33.932603 systemd[1]: sshd@23-172.31.30.243:22-147.75.109.163:42138.service: Deactivated successfully.
Sep 13 00:52:33.935567 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:52:33.938404 systemd-logind[1741]: Removed session 24.
Sep 13 00:52:33.941293 kernel: audit: type=1104 audit(1757724753.927:594): pid=6544 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:33.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.243:22-147.75.109.163:42138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:38.977348 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:52:38.998927 kernel: audit: type=1130 audit(1757724758.964:596): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.243:22-147.75.109.163:42154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:38.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.243:22-147.75.109.163:42154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:38.965565 systemd[1]: Started sshd@24-172.31.30.243:22-147.75.109.163:42154.service.
Sep 13 00:52:39.319599 sshd[6581]: Accepted publickey for core from 147.75.109.163 port 42154 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:52:39.342284 kernel: audit: type=1101 audit(1757724759.318:597): pid=6581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:39.343168 kernel: audit: type=1103 audit(1757724759.325:598): pid=6581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:39.318000 audit[6581]: USER_ACCT pid=6581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:39.325000 audit[6581]: CRED_ACQ pid=6581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:39.330116 sshd[6581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:39.349947 kernel: audit: type=1006 audit(1757724759.325:599): pid=6581 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Sep 13 00:52:39.325000 audit[6581]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb30733f0 a2=3 a3=0 items=0 ppid=1 pid=6581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:39.363758 kernel: audit: type=1300 audit(1757724759.325:599): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb30733f0 a2=3 a3=0 items=0 ppid=1 pid=6581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:39.363924 kernel: audit: type=1327 audit(1757724759.325:599): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:52:39.325000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:52:39.387251 systemd-logind[1741]: New session 25 of user core.
Sep 13 00:52:39.390650 systemd[1]: Started session-25.scope.
Sep 13 00:52:39.398000 audit[6581]: USER_START pid=6581 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:39.409943 kernel: audit: type=1105 audit(1757724759.398:600): pid=6581 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:39.398000 audit[6585]: CRED_ACQ pid=6585 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:39.420704 kernel: audit: type=1103 audit(1757724759.398:601): pid=6585 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:41.712224 sshd[6581]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:41.749570 kernel: audit: type=1106 audit(1757724761.732:602): pid=6581 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:41.751218 kernel: audit: type=1104 audit(1757724761.742:603): pid=6581 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:41.732000 audit[6581]: USER_END pid=6581 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:41.742000 audit[6581]: CRED_DISP pid=6581 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:52:41.767559 systemd[1]: sshd@24-172.31.30.243:22-147.75.109.163:42154.service: Deactivated successfully.
Sep 13 00:52:41.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.243:22-147.75.109.163:42154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:41.776557 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:52:41.777188 systemd-logind[1741]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:52:41.789232 systemd-logind[1741]: Removed session 25.
Sep 13 00:52:55.660432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2693c9ca9e86700b4bf7a9ac9e123d3fcf1cec53dab70ce27b43747415c4600-rootfs.mount: Deactivated successfully.
Sep 13 00:52:55.711773 env[1756]: time="2025-09-13T00:52:55.668845722Z" level=info msg="shim disconnected" id=a2693c9ca9e86700b4bf7a9ac9e123d3fcf1cec53dab70ce27b43747415c4600
Sep 13 00:52:55.711773 env[1756]: time="2025-09-13T00:52:55.668992717Z" level=warning msg="cleaning up after shim disconnected" id=a2693c9ca9e86700b4bf7a9ac9e123d3fcf1cec53dab70ce27b43747415c4600 namespace=k8s.io
Sep 13 00:52:55.711773 env[1756]: time="2025-09-13T00:52:55.669004982Z" level=info msg="cleaning up dead shim"
Sep 13 00:52:55.711773 env[1756]: time="2025-09-13T00:52:55.680224268Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6666 runtime=io.containerd.runc.v2\n"
Sep 13 00:52:55.953332 kubelet[2691]: I0913 00:52:55.953265 2691 scope.go:117] "RemoveContainer" containerID="e1613523b6616a759a942e6869856e4a81c2e6cac373ec82743765dff81a6b72"
Sep 13 00:52:55.992667 kubelet[2691]: I0913 00:52:55.992624 2691 scope.go:117] "RemoveContainer" containerID="a2693c9ca9e86700b4bf7a9ac9e123d3fcf1cec53dab70ce27b43747415c4600"
Sep 13 00:52:56.038371 kubelet[2691]: E0913 00:52:56.037825 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-31-30-243_kube-system(f46d22d2944f8fc600a4e65fcfb61ed6)\"" pod="kube-system/kube-controller-manager-ip-172-31-30-243" podUID="f46d22d2944f8fc600a4e65fcfb61ed6"
Sep 13 00:52:56.066914 env[1756]: time="2025-09-13T00:52:56.066706314Z" level=info msg="RemoveContainer for \"e1613523b6616a759a942e6869856e4a81c2e6cac373ec82743765dff81a6b72\""
Sep 13 00:52:56.076207 env[1756]: time="2025-09-13T00:52:56.076150616Z" level=info msg="RemoveContainer for \"e1613523b6616a759a942e6869856e4a81c2e6cac373ec82743765dff81a6b72\" returns successfully"
Sep 13 00:52:56.233928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afa86985a6f7c72d4dceafb97a4056dc0c0cd303dc942b73dea1d11ce1acf65d-rootfs.mount: Deactivated successfully.
Sep 13 00:52:56.239696 env[1756]: time="2025-09-13T00:52:56.239641203Z" level=info msg="shim disconnected" id=afa86985a6f7c72d4dceafb97a4056dc0c0cd303dc942b73dea1d11ce1acf65d
Sep 13 00:52:56.239696 env[1756]: time="2025-09-13T00:52:56.239691395Z" level=warning msg="cleaning up after shim disconnected" id=afa86985a6f7c72d4dceafb97a4056dc0c0cd303dc942b73dea1d11ce1acf65d namespace=k8s.io
Sep 13 00:52:56.240219 env[1756]: time="2025-09-13T00:52:56.239703435Z" level=info msg="cleaning up dead shim"
Sep 13 00:52:56.250353 env[1756]: time="2025-09-13T00:52:56.250309224Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6694 runtime=io.containerd.runc.v2\n"
Sep 13 00:52:56.866697 kubelet[2691]: I0913 00:52:56.866663 2691 scope.go:117] "RemoveContainer" containerID="afa86985a6f7c72d4dceafb97a4056dc0c0cd303dc942b73dea1d11ce1acf65d"
Sep 13 00:52:56.881940 env[1756]: time="2025-09-13T00:52:56.881896441Z" level=info msg="CreateContainer within sandbox \"bee334829a5b06afb49a4d57098d2ac8f694024420c2eb5c14a13c08fadd4723\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 13 00:52:56.920654 env[1756]: time="2025-09-13T00:52:56.920591220Z" level=info msg="CreateContainer within sandbox \"bee334829a5b06afb49a4d57098d2ac8f694024420c2eb5c14a13c08fadd4723\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2c5e79832428084e162416ab26f4517331a092d727af42e08c86d6bd6ab22ab6\""
Sep 13 00:52:56.921143 env[1756]: time="2025-09-13T00:52:56.921117458Z" level=info msg="StartContainer for \"2c5e79832428084e162416ab26f4517331a092d727af42e08c86d6bd6ab22ab6\""
Sep 13 00:52:56.991241 env[1756]: time="2025-09-13T00:52:56.991184856Z" level=info msg="StartContainer for \"2c5e79832428084e162416ab26f4517331a092d727af42e08c86d6bd6ab22ab6\" returns successfully"
Sep 13 00:53:00.827841 kubelet[2691]: I0913 00:53:00.827722 2691 scope.go:117] "RemoveContainer" containerID="a2693c9ca9e86700b4bf7a9ac9e123d3fcf1cec53dab70ce27b43747415c4600"
Sep 13 00:53:00.828661 kubelet[2691]: E0913 00:53:00.828620 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-31-30-243_kube-system(f46d22d2944f8fc600a4e65fcfb61ed6)\"" pod="kube-system/kube-controller-manager-ip-172-31-30-243" podUID="f46d22d2944f8fc600a4e65fcfb61ed6"
Sep 13 00:53:00.849239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b158848ea6800b27c580dc2cb4c8ff68c854bb176f8f22af7debe879e583021-rootfs.mount: Deactivated successfully.
Sep 13 00:53:00.853647 env[1756]: time="2025-09-13T00:53:00.853578690Z" level=info msg="shim disconnected" id=5b158848ea6800b27c580dc2cb4c8ff68c854bb176f8f22af7debe879e583021
Sep 13 00:53:00.854173 env[1756]: time="2025-09-13T00:53:00.853654263Z" level=warning msg="cleaning up after shim disconnected" id=5b158848ea6800b27c580dc2cb4c8ff68c854bb176f8f22af7debe879e583021 namespace=k8s.io
Sep 13 00:53:00.854173 env[1756]: time="2025-09-13T00:53:00.853668423Z" level=info msg="cleaning up dead shim"
Sep 13 00:53:00.865533 env[1756]: time="2025-09-13T00:53:00.865491362Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6759 runtime=io.containerd.runc.v2\n"
Sep 13 00:53:00.895055 kubelet[2691]: I0913 00:53:00.894930 2691 scope.go:117] "RemoveContainer" containerID="5b158848ea6800b27c580dc2cb4c8ff68c854bb176f8f22af7debe879e583021"
Sep 13 00:53:00.900442 env[1756]: time="2025-09-13T00:53:00.900390546Z" level=info msg="CreateContainer within sandbox \"773d760c3de845c4b4fa714679f5dfc7b0b62cb0d491d9ececb5da13a7e7f026\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 13 00:53:00.933120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953974793.mount: Deactivated successfully.
Sep 13 00:53:00.941148 env[1756]: time="2025-09-13T00:53:00.941094967Z" level=info msg="CreateContainer within sandbox \"773d760c3de845c4b4fa714679f5dfc7b0b62cb0d491d9ececb5da13a7e7f026\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"22a685c3d644e00084468f14d44df87ec93666cb6da51156bba9f935328c1352\""
Sep 13 00:53:00.942076 env[1756]: time="2025-09-13T00:53:00.942043121Z" level=info msg="StartContainer for \"22a685c3d644e00084468f14d44df87ec93666cb6da51156bba9f935328c1352\""
Sep 13 00:53:01.038436 env[1756]: time="2025-09-13T00:53:01.038385584Z" level=info msg="StartContainer for \"22a685c3d644e00084468f14d44df87ec93666cb6da51156bba9f935328c1352\" returns successfully"
Sep 13 00:53:01.438784 kubelet[2691]: E0913 00:53:01.438710 2691 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-243?timeout=10s\": context deadline exceeded"
Sep 13 00:53:01.866314 systemd[1]: run-containerd-runc-k8s.io-22a685c3d644e00084468f14d44df87ec93666cb6da51156bba9f935328c1352-runc.fdcAVE.mount: Deactivated successfully.
Sep 13 00:53:03.257405 systemd[1]: run-containerd-runc-k8s.io-478e0c07c316148c4da74df592e48194091720da467093fee0f514858cab9e32-runc.XudWlp.mount: Deactivated successfully.
Sep 13 00:53:05.520639 kubelet[2691]: I0913 00:53:05.520577 2691 scope.go:117] "RemoveContainer" containerID="a2693c9ca9e86700b4bf7a9ac9e123d3fcf1cec53dab70ce27b43747415c4600" Sep 13 00:53:05.522162 kubelet[2691]: E0913 00:53:05.520864 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-172-31-30-243_kube-system(f46d22d2944f8fc600a4e65fcfb61ed6)\"" pod="kube-system/kube-controller-manager-ip-172-31-30-243" podUID="f46d22d2944f8fc600a4e65fcfb61ed6" Sep 13 00:53:08.628629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c5e79832428084e162416ab26f4517331a092d727af42e08c86d6bd6ab22ab6-rootfs.mount: Deactivated successfully. Sep 13 00:53:08.632631 env[1756]: time="2025-09-13T00:53:08.632590627Z" level=info msg="shim disconnected" id=2c5e79832428084e162416ab26f4517331a092d727af42e08c86d6bd6ab22ab6 Sep 13 00:53:08.633131 env[1756]: time="2025-09-13T00:53:08.633109761Z" level=warning msg="cleaning up after shim disconnected" id=2c5e79832428084e162416ab26f4517331a092d727af42e08c86d6bd6ab22ab6 namespace=k8s.io Sep 13 00:53:08.633292 env[1756]: time="2025-09-13T00:53:08.633275274Z" level=info msg="cleaning up dead shim" Sep 13 00:53:08.641977 env[1756]: time="2025-09-13T00:53:08.641932446Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6842 runtime=io.containerd.runc.v2\n" Sep 13 00:53:08.682001 kubelet[2691]: I0913 00:53:08.681900 2691 scope.go:117] "RemoveContainer" containerID="afa86985a6f7c72d4dceafb97a4056dc0c0cd303dc942b73dea1d11ce1acf65d" Sep 13 00:53:08.684420 env[1756]: time="2025-09-13T00:53:08.683823122Z" level=info msg="RemoveContainer for \"afa86985a6f7c72d4dceafb97a4056dc0c0cd303dc942b73dea1d11ce1acf65d\"" Sep 13 00:53:08.689725 env[1756]: time="2025-09-13T00:53:08.689672428Z" 
level=info msg="RemoveContainer for \"afa86985a6f7c72d4dceafb97a4056dc0c0cd303dc942b73dea1d11ce1acf65d\" returns successfully" Sep 13 00:53:09.019502 kubelet[2691]: I0913 00:53:09.019463 2691 scope.go:117] "RemoveContainer" containerID="2c5e79832428084e162416ab26f4517331a092d727af42e08c86d6bd6ab22ab6" Sep 13 00:53:09.021427 kubelet[2691]: E0913 00:53:09.021348 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-58fc44c59b-pk74d_tigera-operator(18ca4f73-d8f8-4833-8a33-49c11730cffb)\"" pod="tigera-operator/tigera-operator-58fc44c59b-pk74d" podUID="18ca4f73-d8f8-4833-8a33-49c11730cffb" Sep 13 00:53:11.448093 kubelet[2691]: E0913 00:53:11.448039 2691 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-243?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"