May 17 00:35:44.013169 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:35:44.013205 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:35:44.013224 kernel: BIOS-provided physical RAM map:
May 17 00:35:44.013235 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:35:44.013246 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
May 17 00:35:44.013257 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 17 00:35:44.013271 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 17 00:35:44.013283 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 17 00:35:44.013297 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 17 00:35:44.013309 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 17 00:35:44.013321 kernel: NX (Execute Disable) protection: active
May 17 00:35:44.013333 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
May 17 00:35:44.013344 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
May 17 00:35:44.013356 kernel: extended physical RAM map:
May 17 00:35:44.013374 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:35:44.013387 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable
May 17 00:35:44.013399 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable
May 17 00:35:44.013412 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable
May 17 00:35:44.013423 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 17 00:35:44.013436 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 17 00:35:44.013446 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 17 00:35:44.013457 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 17 00:35:44.013469 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 17 00:35:44.013479 kernel: efi: EFI v2.70 by EDK II
May 17 00:35:44.013492 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98
May 17 00:35:44.013503 kernel: SMBIOS 2.7 present.
May 17 00:35:44.013514 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
May 17 00:35:44.013523 kernel: Hypervisor detected: KVM
May 17 00:35:44.013536 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:35:44.013551 kernel: kvm-clock: cpu 0, msr 4519a001, primary cpu clock
May 17 00:35:44.013566 kernel: kvm-clock: using sched offset of 4293671217 cycles
May 17 00:35:44.013581 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:35:44.013599 kernel: tsc: Detected 2499.998 MHz processor
May 17 00:35:44.013611 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:35:44.013627 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:35:44.013646 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
May 17 00:35:44.013657 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:35:44.013668 kernel: Using GB pages for direct mapping
May 17 00:35:44.013679 kernel: Secure boot disabled
May 17 00:35:44.013691 kernel: ACPI: Early table checksum verification disabled
May 17 00:35:44.013707 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
May 17 00:35:44.018791 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
May 17 00:35:44.018815 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 17 00:35:44.018830 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
May 17 00:35:44.018844 kernel: ACPI: FACS 0x00000000789D0000 000040
May 17 00:35:44.018858 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
May 17 00:35:44.018872 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 17 00:35:44.018886 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 17 00:35:44.018899 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
May 17 00:35:44.018915 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
May 17 00:35:44.018929 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 17 00:35:44.018943 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 17 00:35:44.018956 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
May 17 00:35:44.018970 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
May 17 00:35:44.018983 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
May 17 00:35:44.018997 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
May 17 00:35:44.019011 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
May 17 00:35:44.019025 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
May 17 00:35:44.019041 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
May 17 00:35:44.019055 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
May 17 00:35:44.019068 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
May 17 00:35:44.019082 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
May 17 00:35:44.019097 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
May 17 00:35:44.019110 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
May 17 00:35:44.019124 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:35:44.019137 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:35:44.019150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
May 17 00:35:44.019167 kernel: NUMA: Initialized distance table, cnt=1
May 17 00:35:44.019181 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
May 17 00:35:44.019195 kernel: Zone ranges:
May 17 00:35:44.019209 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:35:44.019222 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
May 17 00:35:44.019235 kernel: Normal empty
May 17 00:35:44.019248 kernel: Movable zone start for each node
May 17 00:35:44.019262 kernel: Early memory node ranges
May 17 00:35:44.019276 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 17 00:35:44.019292 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
May 17 00:35:44.019306 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
May 17 00:35:44.019319 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
May 17 00:35:44.019332 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:35:44.019345 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 17 00:35:44.019358 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 17 00:35:44.019372 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
May 17 00:35:44.019386 kernel: ACPI: PM-Timer IO Port: 0xb008
May 17 00:35:44.019400 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
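The BIOS-e820 map logged above can be sanity-checked mechanically. A minimal sketch (the regex and helper are my own illustration, not any kernel interface) that sums the ranges marked `usable`:

```python
import re

# BIOS-e820 entries copied verbatim from the log above.
E820 = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
"""

ENTRY = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w.*)$")

def usable_bytes(text):
    """Sum the sizes of all 'usable' ranges; end addresses are inclusive."""
    total = 0
    for line in text.splitlines():
        m = ENTRY.search(line)
        if m and m.group(3) == "usable":
            total += int(m.group(2), 16) - int(m.group(1), 16) + 1
    return total

# 2037808 KiB of usable RAM; reserving the first 4 KiB page (the later
# "e820: update [mem 0x00000000-0x00000fff] usable ==> reserved" line)
# leaves the 2037804K total the kernel reports in its Memory: summary.
print(usable_bytes(E820) // 1024, "KiB usable")
```

The same three `usable` ranges reappear below as the "Early memory node ranges" for node 0.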
May 17 00:35:44.019423 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
May 17 00:35:44.019436 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:35:44.019450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:35:44.019463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:35:44.019477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:35:44.019489 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:35:44.019502 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:35:44.019515 kernel: TSC deadline timer available
May 17 00:35:44.019529 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:35:44.019546 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
May 17 00:35:44.019560 kernel: Booting paravirtualized kernel on KVM
May 17 00:35:44.019574 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:35:44.019588 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 17 00:35:44.019601 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 17 00:35:44.019614 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 17 00:35:44.019627 kernel: pcpu-alloc: [0] 0 1
May 17 00:35:44.019639 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0
May 17 00:35:44.019653 kernel: kvm-guest: PV spinlocks enabled
May 17 00:35:44.019670 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:35:44.019684 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
May 17 00:35:44.019697 kernel: Policy zone: DMA32
May 17 00:35:44.019725 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:35:44.019740 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:35:44.019754 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:35:44.019767 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:35:44.019782 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:35:44.019799 kernel: Memory: 1876640K/2037804K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 160904K reserved, 0K cma-reserved)
May 17 00:35:44.019813 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:35:44.019826 kernel: Kernel/User page tables isolation: enabled
May 17 00:35:44.019839 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:35:44.019852 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:35:44.019866 kernel: rcu: Hierarchical RCU implementation.
May 17 00:35:44.019881 kernel: rcu: RCU event tracing is enabled.
May 17 00:35:44.019908 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:35:44.019923 kernel: Rude variant of Tasks RCU enabled.
May 17 00:35:44.019938 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:35:44.019952 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
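The kernel command line logged above is plain space-separated `key=value` tokens. A rough illustration of picking it apart (abridged to a few of the parameters shown; this is not the kernel's own tokenizer, which also handles quoting):

```python
# A subset of the command line from the log above.
CMDLINE = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
    "console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 "
    "modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295"
)

def parse_cmdline(s):
    """Collect key=value tokens; repeated keys (console=...) accumulate in order."""
    params = {}
    for tok in s.split():
        key, _, val = tok.partition("=")  # split at the first '=' only
        params.setdefault(key, []).append(val)
    return params

p = parse_cmdline(CMDLINE)
print(p["root"])     # ['LABEL=ROOT']
print(p["console"])  # ['ttyS0,115200n8', 'tty0']
```

Both `console=` devices are enabled (hence the two "printk: console ... enabled" lines later), and `nvme_core.io_timeout=4294967295` is 0xFFFFFFFF, i.e. the NVMe I/O timeout pinned to its 32-bit maximum.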
May 17 00:35:44.019966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:35:44.019982 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:35:44.019996 kernel: random: crng init done
May 17 00:35:44.020010 kernel: Console: colour dummy device 80x25
May 17 00:35:44.020025 kernel: printk: console [tty0] enabled
May 17 00:35:44.020040 kernel: printk: console [ttyS0] enabled
May 17 00:35:44.020054 kernel: ACPI: Core revision 20210730
May 17 00:35:44.020068 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
May 17 00:35:44.020085 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:35:44.020099 kernel: x2apic enabled
May 17 00:35:44.020113 kernel: Switched APIC routing to physical x2apic.
May 17 00:35:44.020128 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
May 17 00:35:44.020143 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
May 17 00:35:44.020157 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 00:35:44.020172 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 00:35:44.020189 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:35:44.020203 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:35:44.020217 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:35:44.020231 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 17 00:35:44.020246 kernel: RETBleed: Vulnerable
May 17 00:35:44.020260 kernel: Speculative Store Bypass: Vulnerable
May 17 00:35:44.020274 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:35:44.020288 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:35:44.020301 kernel: GDS: Unknown: Dependent on hypervisor status
May 17 00:35:44.020315 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:35:44.020329 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:35:44.020346 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:35:44.020361 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 17 00:35:44.020393 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 17 00:35:44.020407 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 17 00:35:44.020421 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 17 00:35:44.020435 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 17 00:35:44.020448 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 17 00:35:44.020461 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:35:44.020475 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 17 00:35:44.020490 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 17 00:35:44.020507 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
May 17 00:35:44.020521 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
May 17 00:35:44.020535 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
May 17 00:35:44.020549 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
May 17 00:35:44.020563 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
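The xstate_offset/xstate_sizes lines above fully determine the reported context size. A small check (offsets and sizes copied from the log; the dict and the end-of-buffer rule are just illustration):

```python
# XSAVE component index -> (offset, size), copied from the x86/fpu lines above.
XSTATE = {
    2: (576, 256),
    3: (832, 64),
    4: (896, 64),
    5: (960, 64),
    6: (1024, 512),
    7: (1536, 1024),
    9: (2560, 8),
}

# In the compacted format the save buffer ends where its last component ends.
context_size = max(off + size for off, size in XSTATE.values())
print(context_size)  # 2568, matching "context size is 2568 bytes"
```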
May 17 00:35:44.020577 kernel: Freeing SMP alternatives memory: 32K
May 17 00:35:44.020591 kernel: pid_max: default: 32768 minimum: 301
May 17 00:35:44.020606 kernel: LSM: Security Framework initializing
May 17 00:35:44.020620 kernel: SELinux: Initializing.
May 17 00:35:44.020634 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:35:44.020648 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:35:44.020663 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 17 00:35:44.020679 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 17 00:35:44.020693 kernel: signal: max sigframe size: 3632
May 17 00:35:44.020707 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:35:44.028839 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:35:44.028865 kernel: smp: Bringing up secondary CPUs ...
May 17 00:35:44.028881 kernel: x86: Booting SMP configuration:
May 17 00:35:44.028895 kernel: .... node #0, CPUs: #1
May 17 00:35:44.028910 kernel: kvm-clock: cpu 1, msr 4519a041, secondary cpu clock
May 17 00:35:44.028924 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0
May 17 00:35:44.028946 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 17 00:35:44.028963 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 00:35:44.028978 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:35:44.028993 kernel: smpboot: Max logical packages: 1
May 17 00:35:44.029008 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
May 17 00:35:44.029023 kernel: devtmpfs: initialized
May 17 00:35:44.029037 kernel: x86/mm: Memory block size: 128MB
May 17 00:35:44.029052 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
May 17 00:35:44.029066 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:35:44.029084 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:35:44.029099 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:35:44.029113 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:35:44.029127 kernel: audit: initializing netlink subsys (disabled)
May 17 00:35:44.029141 kernel: audit: type=2000 audit(1747442143.414:1): state=initialized audit_enabled=0 res=1
May 17 00:35:44.029156 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:35:44.029170 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:35:44.029185 kernel: cpuidle: using governor menu
May 17 00:35:44.029199 kernel: ACPI: bus type PCI registered
May 17 00:35:44.029216 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:35:44.029231 kernel: dca service started, version 1.12.1
May 17 00:35:44.029246 kernel: PCI: Using configuration type 1 for base access
May 17 00:35:44.029262 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
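The 9999.99 BogoMIPS total above is just arithmetic on the preset `lpj` from the earlier calibration line. A worked sketch, assuming HZ=1000 for this kernel build (CONFIG_HZ is not stated anywhere in the log):

```python
# lpj from "Calibrating delay loop (skipped) preset value.. (lpj=2499998)";
# hz=1000 is an assumption about the kernel config, not something the log states.
lpj, hz, ncpus = 2499998, 1000, 2

bogomips = lpj * hz / 500000   # classic BogoMIPS formula: lpj / (500000 / HZ)
total = ncpus * bogomips

def two_decimals(v):
    """Truncate (not round) to two decimals, as the kernel's printout does."""
    return f"{int(v * 100) / 100:.2f}"

print(two_decimals(bogomips), two_decimals(total))  # 4999.99 9999.99
```

Under that assumption the per-CPU figure reproduces the 4999.99 printed at calibration time and the two-CPU total reproduces the 9999.99 printed here.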
May 17 00:35:44.029276 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:35:44.029290 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:35:44.029306 kernel: ACPI: Added _OSI(Module Device)
May 17 00:35:44.029321 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:35:44.029337 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:35:44.029356 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:35:44.029370 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:35:44.029384 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:35:44.029399 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:35:44.029413 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
May 17 00:35:44.029427 kernel: ACPI: Interpreter enabled
May 17 00:35:44.029440 kernel: ACPI: PM: (supports S0 S5)
May 17 00:35:44.029454 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:35:44.029469 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:35:44.029486 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 17 00:35:44.029500 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:35:44.038355 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:35:44.038558 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
May 17 00:35:44.038578 kernel: acpiphp: Slot [3] registered
May 17 00:35:44.038594 kernel: acpiphp: Slot [4] registered
May 17 00:35:44.038608 kernel: acpiphp: Slot [5] registered
May 17 00:35:44.038630 kernel: acpiphp: Slot [6] registered
May 17 00:35:44.038644 kernel: acpiphp: Slot [7] registered
May 17 00:35:44.038659 kernel: acpiphp: Slot [8] registered
May 17 00:35:44.038673 kernel: acpiphp: Slot [9] registered
May 17 00:35:44.038688 kernel: acpiphp: Slot [10] registered
May 17 00:35:44.038703 kernel: acpiphp: Slot [11] registered
May 17 00:35:44.038729 kernel: acpiphp: Slot [12] registered
May 17 00:35:44.038740 kernel: acpiphp: Slot [13] registered
May 17 00:35:44.038752 kernel: acpiphp: Slot [14] registered
May 17 00:35:44.038767 kernel: acpiphp: Slot [15] registered
May 17 00:35:44.038784 kernel: acpiphp: Slot [16] registered
May 17 00:35:44.038799 kernel: acpiphp: Slot [17] registered
May 17 00:35:44.038814 kernel: acpiphp: Slot [18] registered
May 17 00:35:44.038828 kernel: acpiphp: Slot [19] registered
May 17 00:35:44.038843 kernel: acpiphp: Slot [20] registered
May 17 00:35:44.038857 kernel: acpiphp: Slot [21] registered
May 17 00:35:44.038872 kernel: acpiphp: Slot [22] registered
May 17 00:35:44.038886 kernel: acpiphp: Slot [23] registered
May 17 00:35:44.038901 kernel: acpiphp: Slot [24] registered
May 17 00:35:44.038918 kernel: acpiphp: Slot [25] registered
May 17 00:35:44.038933 kernel: acpiphp: Slot [26] registered
May 17 00:35:44.038947 kernel: acpiphp: Slot [27] registered
May 17 00:35:44.038962 kernel: acpiphp: Slot [28] registered
May 17 00:35:44.038977 kernel: acpiphp: Slot [29] registered
May 17 00:35:44.038991 kernel: acpiphp: Slot [30] registered
May 17 00:35:44.039005 kernel: acpiphp: Slot [31] registered
May 17 00:35:44.039020 kernel: PCI host bridge to bus 0000:00
May 17 00:35:44.039149 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:35:44.039263 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:35:44.039370 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:35:44.039508 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 17 00:35:44.039760 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
May 17 00:35:44.039896 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:35:44.040050 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 17 00:35:44.040204 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 17 00:35:44.040354 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
May 17 00:35:44.040486 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
May 17 00:35:44.040617 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
May 17 00:35:44.040766 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
May 17 00:35:44.040900 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
May 17 00:35:44.041031 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
May 17 00:35:44.041172 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
May 17 00:35:44.041304 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
May 17 00:35:44.041446 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
May 17 00:35:44.041579 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
May 17 00:35:44.041711 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 17 00:35:44.041868 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
May 17 00:35:44.042000 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:35:44.042146 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 17 00:35:44.042283 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
May 17 00:35:44.042423 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 17 00:35:44.042554 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
May 17 00:35:44.042574 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:35:44.042590 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:35:44.042605 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:35:44.042624 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:35:44.042639 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 17 00:35:44.042655 kernel: iommu: Default domain type: Translated
May 17 00:35:44.042670 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:35:44.042816 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
May 17 00:35:44.042947 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:35:44.043078 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
May 17 00:35:44.043097 kernel: vgaarb: loaded
May 17 00:35:44.043116 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:35:44.043132 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:35:44.043147 kernel: PTP clock support registered
May 17 00:35:44.043162 kernel: Registered efivars operations
May 17 00:35:44.043177 kernel: PCI: Using ACPI for IRQ routing
May 17 00:35:44.043192 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:35:44.043208 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff]
May 17 00:35:44.043223 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
May 17 00:35:44.043237 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
May 17 00:35:44.043252 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
May 17 00:35:44.043270 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
May 17 00:35:44.043285 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:35:44.043301 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:35:44.043317 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:35:44.043332 kernel: pnp: PnP ACPI init
May 17 00:35:44.043346 kernel: pnp: PnP ACPI: found 5 devices
May 17 00:35:44.043361 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:35:44.043377 kernel: NET: Registered PF_INET protocol family
May 17 00:35:44.043395 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:35:44.043411 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 17 00:35:44.043439 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:35:44.043454 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:35:44.043469 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
May 17 00:35:44.043484 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 17 00:35:44.043499 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:35:44.043514 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:35:44.043529 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:35:44.043547 kernel: NET: Registered PF_XDP protocol family
May 17 00:35:44.043688 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:35:44.043828 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:35:44.043940 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:35:44.044053 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 17 00:35:44.044164 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
May 17 00:35:44.044296 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 17 00:35:44.044424 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
May 17 00:35:44.044444 kernel: PCI: CLS 0 bytes, default 64
May 17 00:35:44.044457 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:35:44.044471 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
May 17 00:35:44.044484 kernel: clocksource: Switched to clocksource tsc
May 17 00:35:44.044497 kernel: Initialise system trusted keyrings
May 17 00:35:44.044510 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 17 00:35:44.044524 kernel: Key type asymmetric registered
May 17 00:35:44.044537 kernel: Asymmetric key parser 'x509' registered
May 17 00:35:44.044549 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:35:44.044565 kernel: io scheduler mq-deadline registered
May 17 00:35:44.044577 kernel: io scheduler kyber registered
May 17 00:35:44.044590 kernel: io scheduler bfq registered
May 17 00:35:44.044603 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:35:44.044615 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:35:44.044627 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:35:44.044640 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:35:44.044653 kernel: i8042: Warning: Keylock active
May 17 00:35:44.044665 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:35:44.044679 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:35:44.044821 kernel: rtc_cmos 00:00: RTC can wake from S4
May 17 00:35:44.044934 kernel: rtc_cmos 00:00: registered as rtc0
May 17 00:35:44.045044 kernel: rtc_cmos 00:00: setting system clock to 2025-05-17T00:35:43 UTC (1747442143)
May 17 00:35:44.045156 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
May 17 00:35:44.045171 kernel: intel_pstate: CPU model not supported
May 17 00:35:44.045184 kernel: efifb: probing for efifb
May 17 00:35:44.045196 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
May 17 00:35:44.045213 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
May 17 00:35:44.045225 kernel: efifb: scrolling: redraw
May 17 00:35:44.045239 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 00:35:44.045252 kernel: Console: switching to colour frame buffer device 100x37
May 17 00:35:44.045265 kernel: fb0: EFI VGA frame buffer device
May 17 00:35:44.045277 kernel: pstore: Registered efi as persistent store backend
May 17 00:35:44.045310 kernel: NET: Registered PF_INET6 protocol family
May 17 00:35:44.045326 kernel: Segment Routing with IPv6
May 17 00:35:44.045339 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:35:44.045357 kernel: NET: Registered PF_PACKET protocol family
May 17 00:35:44.045371 kernel: Key type dns_resolver registered
May 17 00:35:44.045384 kernel: IPI shorthand broadcast: enabled
May 17 00:35:44.045397 kernel: sched_clock: Marking stable (356002948, 127585487)->(564175272, -80586837)
May 17 00:35:44.045411 kernel: registered taskstats version 1
May 17 00:35:44.045425 kernel: Loading compiled-in X.509 certificates
May 17 00:35:44.045438 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c'
May 17 00:35:44.045452 kernel: Key type .fscrypt registered
May 17 00:35:44.045465 kernel: Key type fscrypt-provisioning registered
May 17 00:35:44.045481 kernel: pstore: Using crash dump compression: deflate
May 17 00:35:44.045495 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:35:44.045508 kernel: ima: Allocated hash algorithm: sha1
May 17 00:35:44.045522 kernel: ima: No architecture policies found
May 17 00:35:44.045536 kernel: clk: Disabling unused clocks
May 17 00:35:44.045548 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:35:44.045562 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:35:44.045575 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:35:44.045588 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:35:44.045604 kernel: Run /init as init process
May 17 00:35:44.045617 kernel: with arguments:
May 17 00:35:44.045630 kernel: /init
May 17 00:35:44.045643 kernel: with environment:
May 17 00:35:44.045656 kernel: HOME=/
May 17 00:35:44.045670 kernel: TERM=linux
May 17 00:35:44.045684 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:35:44.045701 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:35:44.045731 systemd[1]: Detected virtualization amazon.
May 17 00:35:44.045746 systemd[1]: Detected architecture x86-64.
May 17 00:35:44.045759 systemd[1]: Running in initrd.
May 17 00:35:44.045773 systemd[1]: No hostname configured, using default hostname.
May 17 00:35:44.045787 systemd[1]: Hostname set to .
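The rtc_cmos line further up pairs a human-readable time with a raw epoch value, and the two can be cross-checked (a quick verification sketch, nothing more):

```python
from datetime import datetime, timezone

# Epoch seconds from "rtc_cmos 00:00: setting system clock to
# 2025-05-17T00:35:43 UTC (1747442143)" in the log above.
when = datetime.fromtimestamp(1747442143, tz=timezone.utc)
print(when.isoformat())  # 2025-05-17T00:35:43+00:00
```

The journal's own audit timestamps (audit(1747442143.414:1), audit(1747442144.029:2), ...) sit within a second of this value, as expected right after the clock is set.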
May 17 00:35:44.045801 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:35:44.045814 systemd[1]: Queued start job for default target initrd.target.
May 17 00:35:44.045831 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:35:44.045848 systemd[1]: Reached target cryptsetup.target.
May 17 00:35:44.045861 systemd[1]: Reached target paths.target.
May 17 00:35:44.045875 systemd[1]: Reached target slices.target.
May 17 00:35:44.045888 systemd[1]: Reached target swap.target.
May 17 00:35:44.045902 systemd[1]: Reached target timers.target.
May 17 00:35:44.045920 systemd[1]: Listening on iscsid.socket.
May 17 00:35:44.045934 systemd[1]: Listening on iscsiuio.socket.
May 17 00:35:44.045948 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:35:44.045961 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:35:44.045975 systemd[1]: Listening on systemd-journald.socket.
May 17 00:35:44.045988 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:35:44.046003 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:35:44.046020 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:35:44.046034 systemd[1]: Reached target sockets.target.
May 17 00:35:44.046047 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:35:44.046062 systemd[1]: Finished network-cleanup.service.
May 17 00:35:44.046076 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:35:44.046090 systemd[1]: Starting systemd-journald.service...
May 17 00:35:44.046104 systemd[1]: Starting systemd-modules-load.service...
May 17 00:35:44.046118 systemd[1]: Starting systemd-resolved.service...
May 17 00:35:44.046132 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:35:44.046148 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:35:44.046163 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:35:44.046177 kernel: audit: type=1130 audit(1747442144.029:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.046192 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:35:44.046206 kernel: audit: type=1130 audit(1747442144.040:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.046229 systemd-journald[185]: Journal started
May 17 00:35:44.046308 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2edfb936a881fdce0df5668e38922c) is 4.8M, max 38.3M, 33.5M free.
May 17 00:35:44.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.031636 systemd-resolved[187]: Positive Trust Anchors:
May 17 00:35:44.058406 systemd[1]: Started systemd-resolved.service.
May 17 00:35:44.058454 systemd[1]: Started systemd-journald.service.
May 17 00:35:44.058474 kernel: audit: type=1130 audit(1747442144.049:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.031652 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:35:44.031710 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:35:44.035562 systemd-resolved[187]: Defaulting to hostname 'linux'.
May 17 00:35:44.082243 kernel: audit: type=1130 audit(1747442144.065:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.082285 kernel: audit: type=1130 audit(1747442144.067:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.047045 systemd-modules-load[186]: Inserted module 'overlay'
May 17 00:35:44.067087 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:35:44.068373 systemd[1]: Reached target nss-lookup.target.
May 17 00:35:44.088132 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:35:44.090470 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:35:44.112697 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:35:44.112784 kernel: audit: type=1130 audit(1747442144.111:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.111400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:35:44.128501 systemd-modules-load[186]: Inserted module 'br_netfilter'
May 17 00:35:44.129771 kernel: Bridge firewalling registered
May 17 00:35:44.128801 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:35:44.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.137730 kernel: audit: type=1130 audit(1747442144.131:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.137850 systemd[1]: Starting dracut-cmdline.service...
May 17 00:35:44.152191 dracut-cmdline[203]: dracut-dracut-053
May 17 00:35:44.156835 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:35:44.168967 kernel: SCSI subsystem initialized
May 17 00:35:44.187857 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:35:44.187932 kernel: device-mapper: uevent: version 1.0.3
May 17 00:35:44.190476 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:35:44.195119 systemd-modules-load[186]: Inserted module 'dm_multipath'
May 17 00:35:44.196073 systemd[1]: Finished systemd-modules-load.service.
May 17 00:35:44.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.199693 systemd[1]: Starting systemd-sysctl.service...
May 17 00:35:44.207316 kernel: audit: type=1130 audit(1747442144.197:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.213870 systemd[1]: Finished systemd-sysctl.service.
May 17 00:35:44.221804 kernel: audit: type=1130 audit(1747442144.214:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.252751 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:35:44.271752 kernel: iscsi: registered transport (tcp)
May 17 00:35:44.297255 kernel: iscsi: registered transport (qla4xxx)
May 17 00:35:44.297341 kernel: QLogic iSCSI HBA Driver
May 17 00:35:44.329837 systemd[1]: Finished dracut-cmdline.service.
May 17 00:35:44.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.331969 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:35:44.384779 kernel: raid6: avx512x4 gen() 18273 MB/s
May 17 00:35:44.402771 kernel: raid6: avx512x4 xor() 7278 MB/s
May 17 00:35:44.420779 kernel: raid6: avx512x2 gen() 17348 MB/s
May 17 00:35:44.438746 kernel: raid6: avx512x2 xor() 24361 MB/s
May 17 00:35:44.456758 kernel: raid6: avx512x1 gen() 18024 MB/s
May 17 00:35:44.474763 kernel: raid6: avx512x1 xor() 21891 MB/s
May 17 00:35:44.492757 kernel: raid6: avx2x4 gen() 18142 MB/s
May 17 00:35:44.510765 kernel: raid6: avx2x4 xor() 6895 MB/s
May 17 00:35:44.528764 kernel: raid6: avx2x2 gen() 18205 MB/s
May 17 00:35:44.546751 kernel: raid6: avx2x2 xor() 18218 MB/s
May 17 00:35:44.564748 kernel: raid6: avx2x1 gen() 13514 MB/s
May 17 00:35:44.582745 kernel: raid6: avx2x1 xor() 15727 MB/s
May 17 00:35:44.600742 kernel: raid6: sse2x4 gen() 9553 MB/s
May 17 00:35:44.618753 kernel: raid6: sse2x4 xor() 5568 MB/s
May 17 00:35:44.636747 kernel: raid6: sse2x2 gen() 10451 MB/s
May 17 00:35:44.654759 kernel: raid6: sse2x2 xor() 6242 MB/s
May 17 00:35:44.672749 kernel: raid6: sse2x1 gen() 9418 MB/s
May 17 00:35:44.690925 kernel: raid6: sse2x1 xor() 4716 MB/s
May 17 00:35:44.691002 kernel: raid6: using algorithm avx512x4 gen() 18273 MB/s
May 17 00:35:44.691032 kernel: raid6: .... xor() 7278 MB/s, rmw enabled
May 17 00:35:44.692011 kernel: raid6: using avx512x2 recovery algorithm
May 17 00:35:44.706748 kernel: xor: automatically using best checksumming function avx
May 17 00:35:44.813751 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 17 00:35:44.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.823000 audit: BPF prog-id=7 op=LOAD
May 17 00:35:44.823000 audit: BPF prog-id=8 op=LOAD
May 17 00:35:44.823106 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:35:44.824634 systemd[1]: Starting systemd-udevd.service...
May 17 00:35:44.839032 systemd-udevd[385]: Using default interface naming scheme 'v252'.
May 17 00:35:44.844793 systemd[1]: Started systemd-udevd.service.
May 17 00:35:44.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.847971 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:35:44.868049 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation
May 17 00:35:44.901612 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:35:44.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.903397 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:35:44.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:44.950923 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:35:45.012744 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:35:45.043688 kernel: nvme nvme0: pci function 0000:00:04.0
May 17 00:35:45.043992 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 17 00:35:45.054957 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:35:45.059576 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 17 00:35:45.078408 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 17 00:35:45.078540 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 17 00:35:45.078649 kernel: AES CTR mode by8 optimization enabled
May 17 00:35:45.078662 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
May 17 00:35:45.078794 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:35:45.078810 kernel: GPT:9289727 != 16777215
May 17 00:35:45.078826 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:35:45.078842 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:43:7a:ed:43:6f
May 17 00:35:45.078997 kernel: GPT:9289727 != 16777215
May 17 00:35:45.079013 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:35:45.079030 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:35:45.083418 (udev-worker)[440]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:35:45.149744 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (433)
May 17 00:35:45.160129 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 17 00:35:45.198482 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:35:45.206764 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:35:45.215485 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:35:45.216298 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:35:45.218824 systemd[1]: Starting disk-uuid.service...
May 17 00:35:45.225469 disk-uuid[593]: Primary Header is updated.
May 17 00:35:45.225469 disk-uuid[593]: Secondary Entries is updated.
May 17 00:35:45.225469 disk-uuid[593]: Secondary Header is updated.
May 17 00:35:45.232745 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:35:45.239747 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:35:46.246506 disk-uuid[594]: The operation has completed successfully.
May 17 00:35:46.247878 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:35:46.359371 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:35:46.359498 systemd[1]: Finished disk-uuid.service.
May 17 00:35:46.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:46.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:46.366671 systemd[1]: Starting verity-setup.service...
May 17 00:35:46.387147 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:35:46.492347 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:35:46.494577 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:35:46.498186 systemd[1]: Finished verity-setup.service.
May 17 00:35:46.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:46.590747 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:35:46.591239 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:35:46.592294 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:35:46.593423 systemd[1]: Starting ignition-setup.service...
May 17 00:35:46.598304 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:35:46.619257 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:35:46.619343 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:35:46.619363 kernel: BTRFS info (device nvme0n1p6): has skinny extents
May 17 00:35:46.640741 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:35:46.654047 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:35:46.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:46.666495 systemd[1]: Finished ignition-setup.service.
May 17 00:35:46.668406 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:35:46.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:46.692105 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:35:46.693000 audit: BPF prog-id=9 op=LOAD
May 17 00:35:46.694522 systemd[1]: Starting systemd-networkd.service...
May 17 00:35:46.718597 systemd-networkd[1106]: lo: Link UP
May 17 00:35:46.718610 systemd-networkd[1106]: lo: Gained carrier
May 17 00:35:46.719631 systemd-networkd[1106]: Enumeration completed
May 17 00:35:46.719773 systemd[1]: Started systemd-networkd.service.
May 17 00:35:46.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:46.720091 systemd-networkd[1106]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:35:46.723330 systemd[1]: Reached target network.target.
May 17 00:35:46.724434 systemd-networkd[1106]: eth0: Link UP
May 17 00:35:46.724441 systemd-networkd[1106]: eth0: Gained carrier
May 17 00:35:46.727962 systemd[1]: Starting iscsiuio.service...
May 17 00:35:46.735558 systemd[1]: Started iscsiuio.service.
May 17 00:35:46.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:46.738149 systemd[1]: Starting iscsid.service...
May 17 00:35:46.740968 systemd-networkd[1106]: eth0: DHCPv4 address 172.31.26.143/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 17 00:35:46.745322 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:35:46.745322 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 17 00:35:46.745322 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:35:46.745322 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:35:46.745322 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:35:46.745322 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:35:46.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:46.745957 systemd[1]: Started iscsid.service.
May 17 00:35:46.749535 systemd[1]: Starting dracut-initqueue.service...
May 17 00:35:46.765672 systemd[1]: Finished dracut-initqueue.service.
May 17 00:35:46.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:46.766533 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:35:46.767862 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:35:46.769090 systemd[1]: Reached target remote-fs.target.
May 17 00:35:46.771697 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:35:46.781272 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:35:46.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.211473 ignition[1080]: Ignition 2.14.0
May 17 00:35:47.211482 ignition[1080]: Stage: fetch-offline
May 17 00:35:47.211604 ignition[1080]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:35:47.211637 ignition[1080]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:35:47.225133 ignition[1080]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:35:47.225544 ignition[1080]: Ignition finished successfully
May 17 00:35:47.226554 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:35:47.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.228298 systemd[1]: Starting ignition-fetch.service...
May 17 00:35:47.237581 ignition[1130]: Ignition 2.14.0
May 17 00:35:47.237597 ignition[1130]: Stage: fetch
May 17 00:35:47.237836 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:35:47.237870 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:35:47.246310 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:35:47.247351 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:35:47.263788 ignition[1130]: INFO : PUT result: OK
May 17 00:35:47.266529 ignition[1130]: DEBUG : parsed url from cmdline: ""
May 17 00:35:47.266529 ignition[1130]: INFO : no config URL provided
May 17 00:35:47.266529 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign"
May 17 00:35:47.266529 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign"
May 17 00:35:47.270276 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:35:47.270276 ignition[1130]: INFO : PUT result: OK
May 17 00:35:47.270276 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 17 00:35:47.270276 ignition[1130]: INFO : GET result: OK
May 17 00:35:47.270276 ignition[1130]: DEBUG : parsing config with SHA512: 0b7085994a0a7a969bb9ad3ccb429994d6bf92ae9cd49b8eb459987af92b968952550be723de6d90028e31bba48beacb2eadfa03cf5a05d643efc9a4b3d78d79
May 17 00:35:47.279373 unknown[1130]: fetched base config from "system"
May 17 00:35:47.279385 unknown[1130]: fetched base config from "system"
May 17 00:35:47.279392 unknown[1130]: fetched user config from "aws"
May 17 00:35:47.280517 ignition[1130]: fetch: fetch complete
May 17 00:35:47.280523 ignition[1130]: fetch: fetch passed
May 17 00:35:47.281960 systemd[1]: Finished ignition-fetch.service.
May 17 00:35:47.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.280586 ignition[1130]: Ignition finished successfully
May 17 00:35:47.283670 systemd[1]: Starting ignition-kargs.service...
May 17 00:35:47.294232 ignition[1136]: Ignition 2.14.0
May 17 00:35:47.294245 ignition[1136]: Stage: kargs
May 17 00:35:47.294460 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:35:47.294495 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:35:47.302215 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:35:47.303122 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:35:47.304103 ignition[1136]: INFO : PUT result: OK
May 17 00:35:47.306977 ignition[1136]: kargs: kargs passed
May 17 00:35:47.307056 ignition[1136]: Ignition finished successfully
May 17 00:35:47.308936 systemd[1]: Finished ignition-kargs.service.
May 17 00:35:47.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.310805 systemd[1]: Starting ignition-disks.service...
May 17 00:35:47.320474 ignition[1142]: Ignition 2.14.0
May 17 00:35:47.320487 ignition[1142]: Stage: disks
May 17 00:35:47.320706 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:35:47.320788 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:35:47.328330 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:35:47.329221 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:35:47.330047 ignition[1142]: INFO : PUT result: OK
May 17 00:35:47.332090 ignition[1142]: disks: disks passed
May 17 00:35:47.332159 ignition[1142]: Ignition finished successfully
May 17 00:35:47.334082 systemd[1]: Finished ignition-disks.service.
May 17 00:35:47.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.334792 systemd[1]: Reached target initrd-root-device.target.
May 17 00:35:47.335842 systemd[1]: Reached target local-fs-pre.target.
May 17 00:35:47.336756 systemd[1]: Reached target local-fs.target.
May 17 00:35:47.337633 systemd[1]: Reached target sysinit.target.
May 17 00:35:47.338576 systemd[1]: Reached target basic.target.
May 17 00:35:47.340815 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:35:47.378198 systemd-fsck[1150]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 17 00:35:47.381149 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:35:47.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.382597 systemd[1]: Mounting sysroot.mount...
May 17 00:35:47.400922 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:35:47.400392 systemd[1]: Mounted sysroot.mount.
May 17 00:35:47.401754 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:35:47.412766 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:35:47.414538 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 17 00:35:47.414602 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:35:47.414640 systemd[1]: Reached target ignition-diskful.target.
May 17 00:35:47.419010 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:35:47.423355 systemd[1]: Starting initrd-setup-root.service...
May 17 00:35:47.435987 initrd-setup-root[1171]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:35:47.464069 initrd-setup-root[1179]: cut: /sysroot/etc/group: No such file or directory
May 17 00:35:47.468259 initrd-setup-root[1187]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:35:47.472665 initrd-setup-root[1195]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:35:47.523179 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:35:47.542743 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1203)
May 17 00:35:47.547209 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:35:47.547287 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:35:47.547308 kernel: BTRFS info (device nvme0n1p6): has skinny extents
May 17 00:35:47.567756 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:35:47.576834 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:35:47.646630 systemd[1]: Finished initrd-setup-root.service.
May 17 00:35:47.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.648736 systemd[1]: Starting ignition-mount.service...
May 17 00:35:47.652890 systemd[1]: Starting sysroot-boot.service...
May 17 00:35:47.661251 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 17 00:35:47.661393 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 17 00:35:47.677155 ignition[1232]: INFO : Ignition 2.14.0
May 17 00:35:47.678500 ignition[1232]: INFO : Stage: mount
May 17 00:35:47.680213 ignition[1232]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:35:47.682485 ignition[1232]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:35:47.694970 systemd[1]: Finished sysroot-boot.service.
May 17 00:35:47.696993 ignition[1232]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:35:47.696993 ignition[1232]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:35:47.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.699122 ignition[1232]: INFO : PUT result: OK
May 17 00:35:47.702019 ignition[1232]: INFO : mount: mount passed
May 17 00:35:47.702983 ignition[1232]: INFO : Ignition finished successfully
May 17 00:35:47.703306 systemd[1]: Finished ignition-mount.service.
May 17 00:35:47.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.705762 systemd[1]: Starting ignition-files.service...
May 17 00:35:47.714701 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:35:47.733761 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1242)
May 17 00:35:47.737503 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:35:47.737576 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:35:47.737591 kernel: BTRFS info (device nvme0n1p6): has skinny extents
May 17 00:35:47.752750 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:35:47.756686 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:35:47.768619 ignition[1261]: INFO : Ignition 2.14.0
May 17 00:35:47.768619 ignition[1261]: INFO : Stage: files
May 17 00:35:47.770940 ignition[1261]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:35:47.770940 ignition[1261]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:35:47.778140 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:35:47.779152 ignition[1261]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:35:47.780095 ignition[1261]: INFO : PUT result: OK
May 17 00:35:47.783145 ignition[1261]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:35:47.789134 ignition[1261]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:35:47.789134 ignition[1261]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:35:47.809977 ignition[1261]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:35:47.811804 ignition[1261]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:35:47.813625 ignition[1261]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:35:47.813625 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:35:47.813625 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:35:47.813625 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:35:47.813625 ignition[1261]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 17 00:35:47.812005 unknown[1261]: wrote ssh authorized keys file for user: core
May 17 00:35:47.917625 ignition[1261]: INFO : GET result: OK
May 17 00:35:48.161967 systemd-networkd[1106]: eth0: Gained IPv6LL
May 17 00:35:48.393872 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:35:48.393872 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:35:48.405755 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:35:48.405755 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:35:48.405755 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:35:48.405755 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
May 17 00:35:48.405755 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:35:48.418360 ignition[1261]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1036561029"
May 17 00:35:48.418360 ignition[1261]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1036561029": device or resource busy
May 17 00:35:48.418360 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1036561029", trying btrfs: device or resource busy
May 17 00:35:48.418360 ignition[1261]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1036561029"
May 17 00:35:48.418360 ignition[1261]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1036561029"
May 17 00:35:48.429723 ignition[1261]: INFO : op(3): [started] unmounting "/mnt/oem1036561029"
May 17 00:35:48.430634 ignition[1261]: INFO : op(3): [finished] unmounting "/mnt/oem1036561029"
May 17 00:35:48.430634 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
May 17 00:35:48.430634 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:35:48.430634 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:35:48.430634 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:35:48.430634 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:35:48.438512 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:35:48.438512 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:35:48.438512 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:35:48.438512 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:35:48.438512 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
May 17 00:35:48.438512 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:35:48.447222 ignition[1261]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2561914952"
May 17 00:35:48.447222 ignition[1261]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2561914952": device or resource busy
May 17 00:35:48.447222 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2561914952", trying btrfs: device or resource busy
May 17 00:35:48.447222 ignition[1261]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2561914952"
May 17 00:35:48.447222 ignition[1261]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2561914952"
May 17 00:35:48.447222 ignition[1261]: INFO : op(6): [started] unmounting "/mnt/oem2561914952"
May 17 00:35:48.447222 ignition[1261]: INFO : op(6): [finished] unmounting "/mnt/oem2561914952"
May 17 00:35:48.447222 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
May 17 00:35:48.447222 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:35:48.447222 ignition[1261]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 17 00:35:49.176846 ignition[1261]: INFO : GET result: OK
May 17 00:35:49.606792 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:35:49.608851 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
May 17 00:35:49.611608 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:35:49.617677 ignition[1261]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4178590729"
May 17 00:35:49.627332 ignition[1261]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4178590729": device or resource busy
May 17 00:35:49.627332 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4178590729", trying btrfs: device or resource busy
May 17 00:35:49.627332 ignition[1261]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4178590729"
May 17 00:35:49.627332 ignition[1261]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4178590729"
May 17 00:35:49.627332 ignition[1261]: INFO : op(9): [started] unmounting "/mnt/oem4178590729"
May 17 00:35:49.627332 ignition[1261]: INFO : op(9): [finished] unmounting "/mnt/oem4178590729"
May 17 00:35:49.627332 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
May 17 00:35:49.627332 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
May 17 00:35:49.627332 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:35:49.654607 ignition[1261]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem588807847"
May 17 00:35:49.654607 ignition[1261]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem588807847": device or resource busy
May 17 00:35:49.654607 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem588807847", trying btrfs: device or resource busy
May 17 00:35:49.654607 ignition[1261]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem588807847"
May 17 00:35:49.654607 ignition[1261]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem588807847"
May 17 00:35:49.654607 ignition[1261]: INFO : op(c): [started] unmounting "/mnt/oem588807847"
May 17 00:35:49.627892 systemd[1]: mnt-oem4178590729.mount: Deactivated successfully.
May 17 00:35:49.673570 ignition[1261]: INFO : op(c): [finished] unmounting "/mnt/oem588807847"
May 17 00:35:49.673570 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(10): [started] processing unit "nvidia.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(10): [finished] processing unit "nvidia.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(12): [started] processing unit "amazon-ssm-agent.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(12): op(13): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(12): op(13): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(12): [finished] processing unit "amazon-ssm-agent.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(14): [started] processing unit "containerd.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(14): op(15): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(14): [finished] processing unit "containerd.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 00:35:49.673570 ignition[1261]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 00:35:49.759816 kernel: kauditd_printk_skb: 26 callbacks suppressed
May 17 00:35:49.759855 kernel: audit: type=1130 audit(1747442149.678:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.759877 kernel: audit: type=1130 audit(1747442149.699:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.759898 kernel: audit: type=1131 audit(1747442149.699:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.759930 kernel: audit: type=1130 audit(1747442149.713:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.759952 kernel: audit: type=1130 audit(1747442149.747:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.759972 kernel: audit: type=1131 audit(1747442149.747:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.655900 systemd[1]: mnt-oem588807847.mount: Deactivated successfully.
May 17 00:35:49.761491 ignition[1261]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service"
May 17 00:35:49.761491 ignition[1261]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service"
May 17 00:35:49.761491 ignition[1261]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:35:49.761491 ignition[1261]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:35:49.761491 ignition[1261]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service"
May 17 00:35:49.761491 ignition[1261]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service"
May 17 00:35:49.761491 ignition[1261]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:35:49.761491 ignition[1261]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:35:49.761491 ignition[1261]: INFO : files: files passed
May 17 00:35:49.761491 ignition[1261]: INFO : Ignition finished successfully
May 17 00:35:49.793167 kernel: audit: type=1130 audit(1747442149.782:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.677770 systemd[1]: Finished ignition-files.service.
May 17 00:35:49.689274 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 00:35:49.796068 initrd-setup-root-after-ignition[1286]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:35:49.693309 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 00:35:49.694367 systemd[1]: Starting ignition-quench.service...
May 17 00:35:49.698853 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:35:49.813865 kernel: audit: type=1130 audit(1747442149.802:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.813906 kernel: audit: type=1131 audit(1747442149.802:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.699006 systemd[1]: Finished ignition-quench.service.
May 17 00:35:49.712585 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 00:35:49.714326 systemd[1]: Reached target ignition-complete.target.
May 17 00:35:49.723590 systemd[1]: Starting initrd-parse-etc.service...
May 17 00:35:49.824927 kernel: audit: type=1131 audit(1747442149.818:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.746191 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:35:49.746340 systemd[1]: Finished initrd-parse-etc.service.
May 17 00:35:49.748559 systemd[1]: Reached target initrd-fs.target.
May 17 00:35:49.760753 systemd[1]: Reached target initrd.target.
May 17 00:35:49.762368 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 00:35:49.763831 systemd[1]: Starting dracut-pre-pivot.service...
May 17 00:35:49.780829 systemd[1]: Finished dracut-pre-pivot.service.
May 17 00:35:49.784651 systemd[1]: Starting initrd-cleanup.service...
May 17 00:35:49.801694 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:35:49.801835 systemd[1]: Finished initrd-cleanup.service.
May 17 00:35:49.804652 systemd[1]: Stopped target nss-lookup.target.
May 17 00:35:49.814806 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 00:35:49.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.816454 systemd[1]: Stopped target timers.target.
May 17 00:35:49.818100 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:35:49.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.818189 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 00:35:49.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.819654 systemd[1]: Stopped target initrd.target.
May 17 00:35:49.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.825694 systemd[1]: Stopped target basic.target.
May 17 00:35:49.827055 systemd[1]: Stopped target ignition-complete.target.
May 17 00:35:49.828456 systemd[1]: Stopped target ignition-diskful.target.
May 17 00:35:49.829746 systemd[1]: Stopped target initrd-root-device.target.
May 17 00:35:49.830990 systemd[1]: Stopped target remote-fs.target.
May 17 00:35:49.832982 systemd[1]: Stopped target remote-fs-pre.target.
May 17 00:35:49.834246 systemd[1]: Stopped target sysinit.target.
May 17 00:35:49.835456 systemd[1]: Stopped target local-fs.target.
May 17 00:35:49.836761 systemd[1]: Stopped target local-fs-pre.target.
May 17 00:35:49.837948 systemd[1]: Stopped target swap.target.
May 17 00:35:49.839099 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:35:49.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.839191 systemd[1]: Stopped dracut-pre-mount.service.
May 17 00:35:49.861377 ignition[1299]: INFO : Ignition 2.14.0
May 17 00:35:49.861377 ignition[1299]: INFO : Stage: umount
May 17 00:35:49.861377 ignition[1299]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:35:49.861377 ignition[1299]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:35:49.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.840466 systemd[1]: Stopped target cryptsetup.target.
May 17 00:35:49.841551 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:35:49.841630 systemd[1]: Stopped dracut-initqueue.service.
May 17 00:35:49.842856 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:35:49.842922 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 00:35:49.844073 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:35:49.844134 systemd[1]: Stopped ignition-files.service.
May 17 00:35:49.846334 systemd[1]: Stopping ignition-mount.service...
May 17 00:35:49.855747 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:35:49.855855 systemd[1]: Stopped kmod-static-nodes.service.
May 17 00:35:49.860559 systemd[1]: Stopping sysroot-boot.service...
May 17 00:35:49.862478 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:35:49.862579 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 00:35:49.864528 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:35:49.864602 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 00:35:49.887748 ignition[1299]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:35:49.887748 ignition[1299]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:35:49.887748 ignition[1299]: INFO : PUT result: OK
May 17 00:35:49.889869 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:35:49.893532 ignition[1299]: INFO : umount: umount passed
May 17 00:35:49.894226 ignition[1299]: INFO : Ignition finished successfully
May 17 00:35:49.896249 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:35:49.896380 systemd[1]: Stopped ignition-mount.service.
May 17 00:35:49.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.897700 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:35:49.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.897790 systemd[1]: Stopped ignition-disks.service.
May 17 00:35:49.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.899451 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:35:49.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.899522 systemd[1]: Stopped ignition-kargs.service.
May 17 00:35:49.900763 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:35:49.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.900828 systemd[1]: Stopped ignition-fetch.service.
May 17 00:35:49.901978 systemd[1]: Stopped target network.target.
May 17 00:35:49.903102 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:35:49.903173 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 00:35:49.904450 systemd[1]: Stopped target paths.target.
May 17 00:35:49.905514 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:35:49.908791 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 00:35:49.909476 systemd[1]: Stopped target slices.target.
May 17 00:35:49.910526 systemd[1]: Stopped target sockets.target.
May 17 00:35:49.911793 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:35:49.911838 systemd[1]: Closed iscsid.socket.
May 17 00:35:49.913470 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:35:49.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.913513 systemd[1]: Closed iscsiuio.socket.
May 17 00:35:49.914565 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:35:49.914637 systemd[1]: Stopped ignition-setup.service.
May 17 00:35:49.916802 systemd[1]: Stopping systemd-networkd.service...
May 17 00:35:49.917966 systemd[1]: Stopping systemd-resolved.service...
May 17 00:35:49.920996 systemd-networkd[1106]: eth0: DHCPv6 lease lost
May 17 00:35:49.922306 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:35:49.922447 systemd[1]: Stopped systemd-networkd.service.
May 17 00:35:49.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.925310 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:35:49.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.926000 audit: BPF prog-id=9 op=UNLOAD
May 17 00:35:49.925483 systemd[1]: Stopped systemd-resolved.service.
May 17 00:35:49.927000 audit: BPF prog-id=6 op=UNLOAD
May 17 00:35:49.926914 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:35:49.926960 systemd[1]: Closed systemd-networkd.socket.
May 17 00:35:49.929278 systemd[1]: Stopping network-cleanup.service...
May 17 00:35:49.932468 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:35:49.932566 systemd[1]: Stopped parse-ip-for-networkd.service.
May 17 00:35:49.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.933760 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:35:49.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.933840 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:35:49.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.935027 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:35:49.935094 systemd[1]: Stopped systemd-modules-load.service.
May 17 00:35:49.941914 systemd[1]: Stopping systemd-udevd.service...
May 17 00:35:49.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.947305 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:35:49.947623 systemd[1]: Stopped network-cleanup.service.
May 17 00:35:49.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.949031 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:35:49.949179 systemd[1]: Stopped systemd-udevd.service.
May 17 00:35:49.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.950607 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:35:49.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.950645 systemd[1]: Closed systemd-udevd-control.socket.
May 17 00:35:49.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:49.951858 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:35:49.951913 systemd[1]: Closed systemd-udevd-kernel.socket.
May 17 00:35:49.953467 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:35:49.953526 systemd[1]: Stopped dracut-pre-udev.service.
May 17 00:35:49.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:49.954380 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:35:49.954442 systemd[1]: Stopped dracut-cmdline.service. May 17 00:35:49.955604 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:35:49.955670 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:35:49.957989 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:35:49.962684 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:35:49.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:49.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:49.962832 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:35:49.969751 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:35:49.969864 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:35:50.032385 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:35:50.032502 systemd[1]: Stopped sysroot-boot.service. May 17 00:35:50.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:50.033917 systemd[1]: Reached target initrd-switch-root.target. May 17 00:35:50.034829 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
May 17 00:35:50.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:50.034894 systemd[1]: Stopped initrd-setup-root.service. May 17 00:35:50.037386 systemd[1]: Starting initrd-switch-root.service... May 17 00:35:50.061531 systemd[1]: Switching root. May 17 00:35:50.064000 audit: BPF prog-id=5 op=UNLOAD May 17 00:35:50.064000 audit: BPF prog-id=4 op=UNLOAD May 17 00:35:50.064000 audit: BPF prog-id=3 op=UNLOAD May 17 00:35:50.064000 audit: BPF prog-id=8 op=UNLOAD May 17 00:35:50.064000 audit: BPF prog-id=7 op=UNLOAD May 17 00:35:50.088735 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). May 17 00:35:50.088840 iscsid[1111]: iscsid shutting down. May 17 00:35:50.090052 systemd-journald[185]: Journal stopped May 17 00:35:55.445671 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:35:55.446385 kernel: SELinux: Class anon_inode not defined in policy. May 17 00:35:55.446418 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:35:55.446447 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:35:55.446465 kernel: SELinux: policy capability open_perms=1 May 17 00:35:55.446482 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:35:55.446499 kernel: SELinux: policy capability always_check_network=0 May 17 00:35:55.446518 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:35:55.446537 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:35:55.446558 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:35:55.446579 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:35:55.446603 systemd[1]: Successfully loaded SELinux policy in 86.187ms. May 17 00:35:55.446647 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.696ms. 
May 17 00:35:55.446668 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:35:55.446687 systemd[1]: Detected virtualization amazon. May 17 00:35:55.446706 systemd[1]: Detected architecture x86-64. May 17 00:35:55.446739 systemd[1]: Detected first boot. May 17 00:35:55.446757 systemd[1]: Initializing machine ID from VM UUID. May 17 00:35:55.446775 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 00:35:55.446796 systemd[1]: Populated /etc with preset unit settings. May 17 00:35:55.446821 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:35:55.446842 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:35:55.446861 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:35:55.446882 systemd[1]: Queued start job for default target multi-user.target. May 17 00:35:55.446900 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. May 17 00:35:55.446917 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:35:55.453146 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:35:55.453187 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 17 00:35:55.453206 systemd[1]: Created slice system-getty.slice. May 17 00:35:55.453223 systemd[1]: Created slice system-modprobe.slice. 
May 17 00:35:55.456983 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:35:55.457021 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:35:55.457044 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:35:55.457065 systemd[1]: Created slice user.slice. May 17 00:35:55.457087 systemd[1]: Started systemd-ask-password-console.path. May 17 00:35:55.457108 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:35:55.457138 systemd[1]: Set up automount boot.automount. May 17 00:35:55.457156 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:35:55.457171 systemd[1]: Reached target integritysetup.target. May 17 00:35:55.457187 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:35:55.457205 systemd[1]: Reached target remote-fs.target. May 17 00:35:55.457222 systemd[1]: Reached target slices.target. May 17 00:35:55.457240 systemd[1]: Reached target swap.target. May 17 00:35:55.457256 systemd[1]: Reached target torcx.target. May 17 00:35:55.457276 systemd[1]: Reached target veritysetup.target. May 17 00:35:55.457293 systemd[1]: Listening on systemd-coredump.socket. May 17 00:35:55.457309 systemd[1]: Listening on systemd-initctl.socket. May 17 00:35:55.457326 kernel: kauditd_printk_skb: 46 callbacks suppressed May 17 00:35:55.457346 kernel: audit: type=1400 audit(1747442155.161:86): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:35:55.457365 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:35:55.457386 kernel: audit: type=1335 audit(1747442155.161:87): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 00:35:55.457413 systemd[1]: Listening on systemd-journald-dev-log.socket. 
May 17 00:35:55.457445 systemd[1]: Listening on systemd-journald.socket. May 17 00:35:55.457464 systemd[1]: Listening on systemd-networkd.socket. May 17 00:35:55.457483 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:35:55.457503 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:35:55.457528 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:35:55.457546 systemd[1]: Mounting dev-hugepages.mount... May 17 00:35:55.457567 systemd[1]: Mounting dev-mqueue.mount... May 17 00:35:55.457587 systemd[1]: Mounting media.mount... May 17 00:35:55.457607 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:35:55.457629 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:35:55.457650 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:35:55.457668 systemd[1]: Mounting tmp.mount... May 17 00:35:55.457689 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:35:55.457732 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:35:55.457766 systemd[1]: Starting kmod-static-nodes.service... May 17 00:35:55.457786 systemd[1]: Starting modprobe@configfs.service... May 17 00:35:55.457804 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:35:55.457830 systemd[1]: Starting modprobe@drm.service... May 17 00:35:55.457848 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:35:55.457865 systemd[1]: Starting modprobe@fuse.service... May 17 00:35:55.457883 systemd[1]: Starting modprobe@loop.service... May 17 00:35:55.457900 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:35:55.457930 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 00:35:55.457950 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) 
May 17 00:35:55.457968 systemd[1]: Starting systemd-journald.service... May 17 00:35:55.457990 systemd[1]: Starting systemd-modules-load.service... May 17 00:35:55.458011 systemd[1]: Starting systemd-network-generator.service... May 17 00:35:55.458032 systemd[1]: Starting systemd-remount-fs.service... May 17 00:35:55.458054 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:35:55.458076 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:35:55.458098 systemd[1]: Mounted dev-hugepages.mount. May 17 00:35:55.458123 systemd[1]: Mounted dev-mqueue.mount. May 17 00:35:55.458143 systemd[1]: Mounted media.mount. May 17 00:35:55.458164 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:35:55.458184 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:35:55.458204 systemd[1]: Mounted tmp.mount. May 17 00:35:55.458226 systemd[1]: Finished kmod-static-nodes.service. May 17 00:35:55.458248 kernel: audit: type=1130 audit(1747442155.381:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.458271 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:35:55.458292 systemd[1]: Finished modprobe@configfs.service. May 17 00:35:55.458316 kernel: audit: type=1130 audit(1747442155.397:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.458336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:35:55.458359 kernel: audit: type=1131 audit(1747442155.397:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:55.458379 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:35:55.458400 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:35:55.458423 kernel: loop: module loaded May 17 00:35:55.458443 systemd[1]: Finished modprobe@drm.service. May 17 00:35:55.458463 kernel: audit: type=1130 audit(1747442155.415:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.458486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:35:55.458506 kernel: audit: type=1131 audit(1747442155.415:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.458526 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:35:55.458548 kernel: audit: type=1130 audit(1747442155.428:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.458569 systemd[1]: Finished systemd-modules-load.service. May 17 00:35:55.458589 systemd[1]: Finished systemd-network-generator.service. May 17 00:35:55.458618 systemd-journald[1448]: Journal started May 17 00:35:55.458711 systemd-journald[1448]: Runtime Journal (/run/log/journal/ec2edfb936a881fdce0df5668e38922c) is 4.8M, max 38.3M, 33.5M free. 
May 17 00:35:55.161000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 00:35:55.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.464746 systemd[1]: Started systemd-journald.service. May 17 00:35:55.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:55.472799 kernel: audit: type=1131 audit(1747442155.428:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.478897 kernel: fuse: init (API version 7.34) May 17 00:35:55.479021 kernel: audit: type=1305 audit(1747442155.442:95): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:35:55.442000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:35:55.474306 systemd[1]: Finished systemd-remount-fs.service. May 17 00:35:55.480690 systemd[1]: Reached target network-pre.target. May 17 00:35:55.442000 audit[1448]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffca45acb20 a2=4000 a3=7ffca45acbbc items=0 ppid=1 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:55.442000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:35:55.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:55.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.485792 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:35:55.486978 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:35:55.490026 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:35:55.496122 systemd[1]: Starting systemd-journal-flush.service... May 17 00:35:55.500175 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:35:55.504306 systemd[1]: Starting systemd-random-seed.service... May 17 00:35:55.514248 systemd[1]: Starting systemd-sysctl.service... May 17 00:35:55.519446 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:35:55.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:55.523005 systemd[1]: Finished modprobe@fuse.service. May 17 00:35:55.525064 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:35:55.525854 systemd-journald[1448]: Time spent on flushing to /var/log/journal/ec2edfb936a881fdce0df5668e38922c is 94.600ms for 1145 entries. May 17 00:35:55.525854 systemd-journald[1448]: System Journal (/var/log/journal/ec2edfb936a881fdce0df5668e38922c) is 8.0M, max 195.6M, 187.6M free. May 17 00:35:55.633793 systemd-journald[1448]: Received client request to flush runtime journal. May 17 00:35:55.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:55.529907 systemd[1]: Finished modprobe@loop.service. May 17 00:35:55.531328 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:35:55.541164 systemd[1]: Finished systemd-random-seed.service. May 17 00:35:55.542470 systemd[1]: Reached target first-boot-complete.target. May 17 00:35:55.635292 udevadm[1501]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:35:55.545257 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:35:55.546525 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:35:55.557071 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:35:55.583446 systemd[1]: Finished systemd-sysctl.service. May 17 00:35:55.594980 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:35:55.596559 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:35:55.599613 systemd[1]: Starting systemd-sysusers.service... May 17 00:35:55.605479 systemd[1]: Starting systemd-udev-settle.service... May 17 00:35:55.635512 systemd[1]: Finished systemd-journal-flush.service. May 17 00:35:55.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.793193 systemd[1]: Finished systemd-sysusers.service. May 17 00:35:55.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.796105 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
May 17 00:35:55.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.916929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:35:56.235374 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:35:56.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:56.239442 systemd[1]: Starting systemd-udevd.service... May 17 00:35:56.262091 systemd-udevd[1510]: Using default interface naming scheme 'v252'. May 17 00:35:56.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:56.329604 systemd[1]: Started systemd-udevd.service. May 17 00:35:56.331979 systemd[1]: Starting systemd-networkd.service... May 17 00:35:56.361239 systemd[1]: Starting systemd-userdbd.service... May 17 00:35:56.371738 systemd[1]: Found device dev-ttyS0.device. May 17 00:35:56.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:56.412505 systemd[1]: Started systemd-userdbd.service. May 17 00:35:56.415217 (udev-worker)[1511]: Network interface NamePolicy= disabled on kernel command line. 
May 17 00:35:56.424000 audit[1513]: AVC avc: denied { confidentiality } for pid=1513 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:35:56.440738 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:35:56.424000 audit[1513]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56133ddf7090 a1=338ac a2=7fbb62cd3bc5 a3=5 items=110 ppid=1510 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:56.424000 audit: CWD cwd="/" May 17 00:35:56.424000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=1 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=2 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=3 name=(null) inode=14132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=4 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=5 name=(null) inode=14133 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=6 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=7 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=8 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=9 name=(null) inode=14135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=10 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=11 name=(null) inode=14136 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=12 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=13 name=(null) inode=14137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=14 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 
17 00:35:56.424000 audit: PATH item=15 name=(null) inode=14138 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=16 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=17 name=(null) inode=14139 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=18 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=19 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=20 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=21 name=(null) inode=14141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=22 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=23 name=(null) inode=14142 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=24 
name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=25 name=(null) inode=14143 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=26 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=27 name=(null) inode=14144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=28 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=29 name=(null) inode=14145 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=30 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=31 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=32 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=33 name=(null) inode=14147 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=34 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=35 name=(null) inode=14148 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=36 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=37 name=(null) inode=14149 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=38 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=39 name=(null) inode=14150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.446497 kernel: ACPI: button: Power Button [PWRF] May 17 00:35:56.446577 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 May 17 00:35:56.424000 audit: PATH item=40 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=41 name=(null) inode=14151 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=42 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=43 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=44 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=45 name=(null) inode=14153 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=46 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=47 name=(null) inode=14154 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=48 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=49 name=(null) inode=14155 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=50 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=51 name=(null) inode=14156 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=52 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=53 name=(null) inode=14157 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=55 name=(null) inode=14158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=56 name=(null) inode=14158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=57 name=(null) inode=14159 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=58 name=(null) inode=14158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=59 name=(null) inode=14160 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: 
PATH item=60 name=(null) inode=14158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=61 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=62 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=63 name=(null) inode=14162 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=64 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=65 name=(null) inode=14163 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=66 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=67 name=(null) inode=14164 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=68 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=69 name=(null) inode=14165 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=70 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=71 name=(null) inode=14166 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=72 name=(null) inode=14158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=73 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=74 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=75 name=(null) inode=14168 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=76 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=77 name=(null) inode=14169 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=78 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=79 name=(null) inode=14170 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=80 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=81 name=(null) inode=14171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=82 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=83 name=(null) inode=14172 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=84 name=(null) inode=14158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=85 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=86 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=87 name=(null) inode=14174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=88 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=89 name=(null) inode=14175 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=90 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=91 name=(null) inode=14176 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=92 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=93 name=(null) inode=14177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=94 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=95 name=(null) inode=14178 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=96 name=(null) inode=14158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=97 name=(null) inode=14179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=98 name=(null) inode=14179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.450738 kernel: ACPI: button: Sleep Button [SLPF] May 17 00:35:56.424000 audit: PATH item=99 name=(null) inode=14180 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=100 name=(null) inode=14179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=101 name=(null) inode=14181 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=102 name=(null) inode=14179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=103 name=(null) inode=14182 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=104 name=(null) inode=14179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=105 name=(null) inode=14183 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=106 name=(null) inode=14179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=107 name=(null) inode=14184 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PATH item=109 name=(null) inode=14185 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:56.424000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:35:56.487770 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 May 17 00:35:56.494819 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 17 00:35:56.536052 systemd-networkd[1517]: lo: Link UP May 17 00:35:56.542733 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:35:56.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:56.536464 systemd-networkd[1517]: lo: Gained carrier May 17 00:35:56.537163 systemd-networkd[1517]: Enumeration completed May 17 00:35:56.537344 systemd[1]: Started systemd-networkd.service. May 17 00:35:56.541882 systemd[1]: Starting systemd-networkd-wait-online.service... 
May 17 00:35:56.545346 systemd-networkd[1517]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:35:56.551760 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:35:56.552327 systemd-networkd[1517]: eth0: Link UP May 17 00:35:56.552519 systemd-networkd[1517]: eth0: Gained carrier May 17 00:35:56.564699 systemd-networkd[1517]: eth0: DHCPv4 address 172.31.26.143/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:35:56.681744 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:35:56.683381 systemd[1]: Finished systemd-udev-settle.service. May 17 00:35:56.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:56.685791 systemd[1]: Starting lvm2-activation-early.service... May 17 00:35:56.762560 lvm[1625]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:35:56.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:56.793264 systemd[1]: Finished lvm2-activation-early.service. May 17 00:35:56.794365 systemd[1]: Reached target cryptsetup.target. May 17 00:35:56.797814 systemd[1]: Starting lvm2-activation.service... May 17 00:35:56.803639 lvm[1627]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:35:56.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:56.836743 systemd[1]: Finished lvm2-activation.service. May 17 00:35:56.837882 systemd[1]: Reached target local-fs-pre.target. 
May 17 00:35:56.838750 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:35:56.838787 systemd[1]: Reached target local-fs.target. May 17 00:35:56.839671 systemd[1]: Reached target machines.target. May 17 00:35:56.842914 systemd[1]: Starting ldconfig.service... May 17 00:35:56.845501 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:35:56.845633 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:35:56.847577 systemd[1]: Starting systemd-boot-update.service... May 17 00:35:56.850354 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:35:56.853302 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:35:56.857465 systemd[1]: Starting systemd-sysext.service... May 17 00:35:56.871849 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1630 (bootctl) May 17 00:35:56.873884 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:35:56.881532 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:35:56.890177 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:35:56.890592 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:35:56.908750 kernel: loop0: detected capacity change from 0 to 221472 May 17 00:35:56.916272 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:35:56.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:57.036874 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:35:57.040995 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:35:57.041738 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:35:57.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:57.056744 kernel: loop1: detected capacity change from 0 to 221472 May 17 00:35:57.066595 systemd-fsck[1643]: fsck.fat 4.2 (2021-01-31) May 17 00:35:57.066595 systemd-fsck[1643]: /dev/nvme0n1p1: 790 files, 120726/258078 clusters May 17 00:35:57.069048 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:35:57.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:57.071081 systemd[1]: Mounting boot.mount... May 17 00:35:57.081004 (sd-sysext)[1647]: Using extensions 'kubernetes'. May 17 00:35:57.082032 (sd-sysext)[1647]: Merged extensions into '/usr'. May 17 00:35:57.109457 systemd[1]: Mounted boot.mount. May 17 00:35:57.121411 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:35:57.123971 systemd[1]: Mounting usr-share-oem.mount... May 17 00:35:57.126350 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:35:57.128427 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:35:57.135662 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:35:57.139627 systemd[1]: Starting modprobe@loop.service... 
May 17 00:35:57.140670 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:35:57.140934 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:35:57.141130 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:35:57.149022 systemd[1]: Mounted usr-share-oem.mount. May 17 00:35:57.152449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:35:57.152700 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:35:57.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:57.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:57.154191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:35:57.154423 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:35:57.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:57.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:57.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:57.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:57.156072 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:35:57.156363 systemd[1]: Finished modprobe@loop.service. May 17 00:35:57.169410 systemd[1]: Finished systemd-boot-update.service. May 17 00:35:57.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:57.172441 systemd[1]: Finished systemd-sysext.service. May 17 00:35:57.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:57.182006 systemd[1]: Starting ensure-sysext.service... May 17 00:35:57.183055 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:35:57.183252 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:35:57.185843 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:35:57.197881 systemd[1]: Reloading. May 17 00:35:57.215841 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:35:57.217140 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
May 17 00:35:57.220208 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:35:57.291214 /usr/lib/systemd/system-generators/torcx-generator[1699]: time="2025-05-17T00:35:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:35:57.295803 /usr/lib/systemd/system-generators/torcx-generator[1699]: time="2025-05-17T00:35:57Z" level=info msg="torcx already run" May 17 00:35:57.465234 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:35:57.465467 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:35:57.496094 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:35:57.578483 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:35:57.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:57.586398 systemd[1]: Starting audit-rules.service... May 17 00:35:57.589151 systemd[1]: Starting clean-ca-certificates.service... May 17 00:35:57.592178 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:35:57.598674 systemd[1]: Starting systemd-resolved.service... May 17 00:35:57.606014 systemd[1]: Starting systemd-timesyncd.service... 
May 17 00:35:57.609628 systemd[1]: Starting systemd-update-utmp.service...
May 17 00:35:57.616642 systemd[1]: Finished clean-ca-certificates.service.
May 17 00:35:57.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.624000 audit[1767]: SYSTEM_BOOT pid=1767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.632831 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:35:57.635364 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:35:57.638351 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:35:57.641135 systemd[1]: Starting modprobe@loop.service...
May 17 00:35:57.645982 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:35:57.646347 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:35:57.646634 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:35:57.651289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:35:57.651559 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:35:57.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.653251 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:35:57.656278 systemd[1]: Finished systemd-update-utmp.service.
May 17 00:35:57.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.657688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:35:57.657946 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:35:57.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.664507 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:35:57.666487 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:35:57.670942 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:35:57.673940 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:35:57.674183 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:35:57.674383 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:35:57.678397 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:35:57.678678 systemd[1]: Finished modprobe@loop.service.
May 17 00:35:57.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.680527 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:35:57.680768 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:35:57.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.687626 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:35:57.688748 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:35:57.688990 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:35:57.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.698130 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:35:57.701375 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:35:57.707272 systemd[1]: Starting modprobe@drm.service...
May 17 00:35:57.710152 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:35:57.720518 systemd[1]: Starting modprobe@loop.service...
May 17 00:35:57.721383 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:35:57.721589 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:35:57.721852 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:35:57.726596 systemd[1]: Finished ensure-sysext.service.
May 17 00:35:57.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.727595 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:35:57.727926 systemd[1]: Finished modprobe@loop.service.
May 17 00:35:57.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.752405 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:35:57.752646 systemd[1]: Finished modprobe@drm.service.
May 17 00:35:57.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.754984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:35:57.755231 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:35:57.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.756040 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:35:57.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.765222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:35:57.765472 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:35:57.766316 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:35:57.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:57.781408 systemd[1]: Finished systemd-journal-catalog-update.service.
May 17 00:35:57.840000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 17 00:35:57.840000 audit[1804]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe844358a0 a2=420 a3=0 items=0 ppid=1760 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:35:57.840000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 17 00:35:57.842042 augenrules[1804]: No rules
May 17 00:35:57.842759 systemd[1]: Finished audit-rules.service.
May 17 00:35:57.871516 systemd-resolved[1763]: Positive Trust Anchors:
May 17 00:35:57.872015 systemd-resolved[1763]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:35:57.872141 systemd-resolved[1763]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:35:57.874995 systemd[1]: Started systemd-timesyncd.service.
May 17 00:35:57.875798 systemd[1]: Reached target time-set.target.
May 17 00:35:57.890459 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:35:57.890488 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:35:57.908957 systemd-resolved[1763]: Defaulting to hostname 'linux'.
May 17 00:35:57.911527 systemd[1]: Started systemd-resolved.service.
May 17 00:35:57.912084 systemd[1]: Reached target network.target.
May 17 00:35:57.912484 systemd[1]: Reached target nss-lookup.target.
May 17 00:35:58.433328 systemd-resolved[1763]: Clock change detected. Flushing caches.
May 17 00:35:58.433489 systemd-timesyncd[1764]: Contacted time server 208.67.72.50:123 (0.flatcar.pool.ntp.org).
May 17 00:35:58.433675 systemd-timesyncd[1764]: Initial clock synchronization to Sat 2025-05-17 00:35:58.433226 UTC.
May 17 00:35:58.495126 ldconfig[1629]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:35:58.505284 systemd[1]: Finished ldconfig.service.
May 17 00:35:58.508186 systemd[1]: Starting systemd-update-done.service...
May 17 00:35:58.518192 systemd[1]: Finished systemd-update-done.service.
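[Editor's note, not part of the captured log: the audit PROCTITLE record earlier in this boot sequence hex-encodes the full command line of the audited process, with NUL bytes separating the argv entries. A minimal sketch of decoding that value, using the exact hex string from the log:]

```python
# Audit PROCTITLE records carry the process argv hex-encoded,
# with NUL (0x00) bytes between arguments. Value copied from the
# "audit: PROCTITLE proctitle=..." record in the log above.
proctitle = (
    "2F7362696E2F617564697463746C002D52"
    "002F6574632F61756469742F61756469742E72756C6573"
)
argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```

This confirms the SYSCALL record's `comm="auditctl"`: the rule load that produced "augenrules[1804]: No rules" was `/sbin/auditctl -R /etc/audit/audit.rules`.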
May 17 00:35:58.518965 systemd[1]: Reached target sysinit.target.
May 17 00:35:58.519684 systemd[1]: Started motdgen.path.
May 17 00:35:58.520298 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 17 00:35:58.521114 systemd[1]: Started logrotate.timer.
May 17 00:35:58.521769 systemd[1]: Started mdadm.timer.
May 17 00:35:58.522322 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 17 00:35:58.522878 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:35:58.522927 systemd[1]: Reached target paths.target.
May 17 00:35:58.523428 systemd[1]: Reached target timers.target.
May 17 00:35:58.524397 systemd[1]: Listening on dbus.socket.
May 17 00:35:58.526869 systemd[1]: Starting docker.socket...
May 17 00:35:58.530226 systemd[1]: Listening on sshd.socket.
May 17 00:35:58.531148 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:35:58.532073 systemd[1]: Listening on docker.socket.
May 17 00:35:58.532917 systemd[1]: Reached target sockets.target.
May 17 00:35:58.533552 systemd[1]: Reached target basic.target.
May 17 00:35:58.534317 systemd[1]: System is tainted: cgroupsv1
May 17 00:35:58.534381 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 00:35:58.534415 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 00:35:58.535790 systemd[1]: Starting containerd.service...
May 17 00:35:58.537791 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
May 17 00:35:58.540304 systemd[1]: Starting dbus.service...
May 17 00:35:58.542617 systemd[1]: Starting enable-oem-cloudinit.service...
May 17 00:35:58.545151 systemd[1]: Starting extend-filesystems.service...
May 17 00:35:58.545804 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 17 00:35:58.547713 systemd[1]: Starting motdgen.service...
May 17 00:35:58.559443 systemd[1]: Starting prepare-helm.service...
May 17 00:35:58.564996 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 17 00:35:58.568504 systemd[1]: Starting sshd-keygen.service...
May 17 00:35:58.595955 jq[1820]: false
May 17 00:35:58.578019 systemd[1]: Starting systemd-logind.service...
May 17 00:35:58.578901 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:35:58.579004 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:35:58.581558 systemd[1]: Starting update-engine.service...
May 17 00:35:58.591398 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 17 00:35:58.597486 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:35:58.599914 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 17 00:35:58.602203 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:35:58.602546 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 17 00:35:58.632558 jq[1833]: true
May 17 00:35:58.663046 jq[1848]: true
May 17 00:35:58.664304 tar[1838]: linux-amd64/helm
May 17 00:35:58.672013 extend-filesystems[1821]: Found loop1
May 17 00:35:58.672013 extend-filesystems[1821]: Found nvme0n1
May 17 00:35:58.672013 extend-filesystems[1821]: Found nvme0n1p1
May 17 00:35:58.672013 extend-filesystems[1821]: Found nvme0n1p2
May 17 00:35:58.672013 extend-filesystems[1821]: Found nvme0n1p3
May 17 00:35:58.672013 extend-filesystems[1821]: Found usr
May 17 00:35:58.672013 extend-filesystems[1821]: Found nvme0n1p4
May 17 00:35:58.672013 extend-filesystems[1821]: Found nvme0n1p6
May 17 00:35:58.672013 extend-filesystems[1821]: Found nvme0n1p7
May 17 00:35:58.672013 extend-filesystems[1821]: Found nvme0n1p9
May 17 00:35:58.672013 extend-filesystems[1821]: Checking size of /dev/nvme0n1p9
May 17 00:35:58.680792 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:35:58.703667 dbus-daemon[1819]: [system] SELinux support is enabled
May 17 00:35:58.681133 systemd[1]: Finished motdgen.service.
May 17 00:35:58.703968 systemd[1]: Started dbus.service.
May 17 00:35:58.716768 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:35:58.716805 systemd[1]: Reached target system-config.target.
May 17 00:35:58.717898 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:35:58.717933 systemd[1]: Reached target user-config.target.
May 17 00:35:58.721928 dbus-daemon[1819]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1517 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 17 00:35:58.728041 dbus-daemon[1819]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 17 00:35:58.735299 systemd[1]: Starting systemd-hostnamed.service...
May 17 00:35:58.752826 extend-filesystems[1821]: Resized partition /dev/nvme0n1p9
May 17 00:35:58.768402 extend-filesystems[1870]: resize2fs 1.46.5 (30-Dec-2021)
May 17 00:35:58.777856 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 17 00:35:58.828939 env[1842]: time="2025-05-17T00:35:58.828865737Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 17 00:35:58.867065 systemd-networkd[1517]: eth0: Gained IPv6LL
May 17 00:35:58.877274 systemd[1]: Finished systemd-networkd-wait-online.service.
May 17 00:35:58.929320 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 17 00:35:58.878256 systemd[1]: Reached target network-online.target.
May 17 00:35:58.881609 systemd[1]: Started amazon-ssm-agent.service.
May 17 00:35:58.930806 update_engine[1832]: I0517 00:35:58.928424 1832 main.cc:92] Flatcar Update Engine starting
May 17 00:35:58.886120 systemd[1]: Starting kubelet.service...
May 17 00:35:58.936260 extend-filesystems[1870]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 17 00:35:58.936260 extend-filesystems[1870]: old_desc_blocks = 1, new_desc_blocks = 1
May 17 00:35:58.936260 extend-filesystems[1870]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 17 00:35:58.892623 systemd[1]: Started nvidia.service.
May 17 00:35:59.051801 extend-filesystems[1821]: Resized filesystem in /dev/nvme0n1p9
May 17 00:35:59.052736 bash[1882]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:35:59.052869 update_engine[1832]: I0517 00:35:58.970253 1832 update_check_scheduler.cc:74] Next update check in 3m45s
May 17 00:35:58.931911 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:35:58.932264 systemd[1]: Finished extend-filesystems.service.
May 17 00:35:58.957749 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 17 00:35:58.959278 systemd[1]: Started update-engine.service.
May 17 00:35:58.962791 systemd[1]: Started locksmithd.service.
May 17 00:35:59.154972 systemd-logind[1830]: Watching system buttons on /dev/input/event1 (Power Button)
May 17 00:35:59.155001 systemd-logind[1830]: Watching system buttons on /dev/input/event2 (Sleep Button)
May 17 00:35:59.155028 systemd-logind[1830]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 17 00:35:59.163736 systemd-logind[1830]: New seat seat0.
May 17 00:35:59.183420 systemd[1]: Started systemd-logind.service.
May 17 00:35:59.215177 amazon-ssm-agent[1891]: 2025/05/17 00:35:59 Failed to load instance info from vault. RegistrationKey does not exist.
May 17 00:35:59.217641 amazon-ssm-agent[1891]: Initializing new seelog logger
May 17 00:35:59.222594 amazon-ssm-agent[1891]: New Seelog Logger Creation Complete
May 17 00:35:59.224997 amazon-ssm-agent[1891]: 2025/05/17 00:35:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 17 00:35:59.225144 amazon-ssm-agent[1891]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 17 00:35:59.225476 amazon-ssm-agent[1891]: 2025/05/17 00:35:59 processing appconfig overrides
May 17 00:35:59.268192 env[1842]: time="2025-05-17T00:35:59.267961636Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:35:59.268192 env[1842]: time="2025-05-17T00:35:59.268181181Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:35:59.271370 env[1842]: time="2025-05-17T00:35:59.270850959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:35:59.271370 env[1842]: time="2025-05-17T00:35:59.270897608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:35:59.271370 env[1842]: time="2025-05-17T00:35:59.271232311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:35:59.271370 env[1842]: time="2025-05-17T00:35:59.271256334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:35:59.271370 env[1842]: time="2025-05-17T00:35:59.271275958Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 17 00:35:59.271370 env[1842]: time="2025-05-17T00:35:59.271290493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:35:59.271709 env[1842]: time="2025-05-17T00:35:59.271384640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:35:59.271709 env[1842]: time="2025-05-17T00:35:59.271674284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:35:59.272001 env[1842]: time="2025-05-17T00:35:59.271966093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:35:59.272098 env[1842]: time="2025-05-17T00:35:59.272001467Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:35:59.272098 env[1842]: time="2025-05-17T00:35:59.272073249Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 17 00:35:59.272098 env[1842]: time="2025-05-17T00:35:59.272091803Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287407266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287474379Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287493246Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287540530Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287565746Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287591255Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287612305Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287632183Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287650380Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287677498Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287695868Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287716129Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.287905526Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:35:59.289863 env[1842]: time="2025-05-17T00:35:59.288012888Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288510855Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288555042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288579801Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288636703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288658907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288678357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288695623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288715943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288734811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288752228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288770162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288793639Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288969751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.288992124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:35:59.290482 env[1842]: time="2025-05-17T00:35:59.289012248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:35:59.291153 env[1842]: time="2025-05-17T00:35:59.289029081Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:35:59.291153 env[1842]: time="2025-05-17T00:35:59.289052997Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 17 00:35:59.291153 env[1842]: time="2025-05-17T00:35:59.289069872Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:35:59.291153 env[1842]: time="2025-05-17T00:35:59.289100804Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 17 00:35:59.291153 env[1842]: time="2025-05-17T00:35:59.289150357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:35:59.291336 env[1842]: time="2025-05-17T00:35:59.289427605Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:35:59.291336 env[1842]: time="2025-05-17T00:35:59.289507425Z" level=info msg="Connect containerd service"
May 17 00:35:59.291336 env[1842]: time="2025-05-17T00:35:59.289555268Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:35:59.297197 env[1842]: time="2025-05-17T00:35:59.297147404Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:35:59.300438 env[1842]: time="2025-05-17T00:35:59.300373951Z" level=info msg="Start subscribing containerd event"
May 17 00:35:59.300652 env[1842]: time="2025-05-17T00:35:59.300635284Z" level=info msg="Start recovering state"
May 17 00:35:59.300981 env[1842]: time="2025-05-17T00:35:59.300940699Z" level=info msg="Start event monitor"
May 17 00:35:59.308009 env[1842]: time="2025-05-17T00:35:59.307898510Z" level=info msg="Start snapshots syncer"
May 17 00:35:59.308184 env[1842]: time="2025-05-17T00:35:59.303433712Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:35:59.308356 env[1842]: time="2025-05-17T00:35:59.308334974Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:35:59.308658 systemd[1]: Started containerd.service.
May 17 00:35:59.310660 env[1842]: time="2025-05-17T00:35:59.310631040Z" level=info msg="containerd successfully booted in 0.491015s"
May 17 00:35:59.310765 env[1842]: time="2025-05-17T00:35:59.310749813Z" level=info msg="Start cni network conf syncer for default"
May 17 00:35:59.310855 env[1842]: time="2025-05-17T00:35:59.310823217Z" level=info msg="Start streaming server"
May 17 00:35:59.377575 dbus-daemon[1819]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 17 00:35:59.377766 systemd[1]: Started systemd-hostnamed.service.
May 17 00:35:59.379681 dbus-daemon[1819]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1864 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 17 00:35:59.384522 systemd[1]: Starting polkit.service...
May 17 00:35:59.394479 coreos-metadata[1817]: May 17 00:35:59.392 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 17 00:35:59.404462 coreos-metadata[1817]: May 17 00:35:59.404 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
May 17 00:35:59.405793 coreos-metadata[1817]: May 17 00:35:59.405 INFO Fetch successful
May 17 00:35:59.405964 coreos-metadata[1817]: May 17 00:35:59.405 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
May 17 00:35:59.408874 coreos-metadata[1817]: May 17 00:35:59.408 INFO Fetch successful
May 17 00:35:59.412669 unknown[1817]: wrote ssh authorized keys file for user: core
May 17 00:35:59.414794 polkitd[1963]: Started polkitd version 121
May 17 00:35:59.439526 systemd[1]: nvidia.service: Deactivated successfully.
May 17 00:35:59.443358 polkitd[1963]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:35:59.443448 polkitd[1963]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:35:59.452416 polkitd[1963]: Finished loading, compiling and executing 2 rules May 17 00:35:59.453101 dbus-daemon[1819]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:35:59.453305 systemd[1]: Started polkit.service. May 17 00:35:59.455981 polkitd[1963]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:35:59.462841 update-ssh-keys[1968]: Updated "/home/core/.ssh/authorized_keys" May 17 00:35:59.463243 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 17 00:35:59.497953 systemd-hostnamed[1864]: Hostname set to (transient) May 17 00:35:59.497954 systemd-resolved[1763]: System hostname changed to 'ip-172-31-26-143'. May 17 00:35:59.862415 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO Create new startup processor May 17 00:35:59.862648 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [LongRunningPluginsManager] registered plugins: {} May 17 00:35:59.862719 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO Initializing bookkeeping folders May 17 00:35:59.862719 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO removing the completed state files May 17 00:35:59.862719 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO Initializing bookkeeping folders for long running plugins May 17 00:35:59.862719 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO Initializing replies folder for MDS reply requests that couldn't reach the service May 17 00:35:59.862879 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO Initializing healthcheck folders for long running plugins May 17 00:35:59.862879 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO Initializing locations for inventory plugin May 17 00:35:59.862879 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO Initializing default location for custom inventory May 17 00:35:59.862879 
amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO Initializing default location for file inventory May 17 00:35:59.862879 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO Initializing default location for role inventory May 17 00:35:59.862879 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO Init the cloudwatchlogs publisher May 17 00:35:59.865518 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [instanceID=i-0e4b4d14950b89662] Successfully loaded platform independent plugin aws:runDockerAction May 17 00:35:59.865633 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [instanceID=i-0e4b4d14950b89662] Successfully loaded platform independent plugin aws:downloadContent May 17 00:35:59.865633 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [instanceID=i-0e4b4d14950b89662] Successfully loaded platform independent plugin aws:softwareInventory May 17 00:35:59.865633 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [instanceID=i-0e4b4d14950b89662] Successfully loaded platform independent plugin aws:runPowerShellScript May 17 00:35:59.865633 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [instanceID=i-0e4b4d14950b89662] Successfully loaded platform independent plugin aws:updateSsmAgent May 17 00:35:59.865633 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [instanceID=i-0e4b4d14950b89662] Successfully loaded platform independent plugin aws:runDocument May 17 00:35:59.865633 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [instanceID=i-0e4b4d14950b89662] Successfully loaded platform independent plugin aws:configureDocker May 17 00:35:59.865633 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [instanceID=i-0e4b4d14950b89662] Successfully loaded platform independent plugin aws:refreshAssociation May 17 00:35:59.865899 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [instanceID=i-0e4b4d14950b89662] Successfully loaded platform independent plugin aws:configurePackage May 17 00:35:59.865899 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [instanceID=i-0e4b4d14950b89662] 
Successfully loaded platform dependent plugin aws:runShellScript May 17 00:35:59.865899 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 May 17 00:35:59.865899 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO OS: linux, Arch: amd64 May 17 00:35:59.883340 amazon-ssm-agent[1891]: datastore file /var/lib/amazon/ssm/i-0e4b4d14950b89662/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute May 17 00:35:59.958880 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessagingDeliveryService] Starting document processing engine... May 17 00:36:00.054231 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessagingDeliveryService] [EngineProcessor] Starting May 17 00:36:00.148623 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing May 17 00:36:00.150372 tar[1838]: linux-amd64/LICENSE May 17 00:36:00.150952 tar[1838]: linux-amd64/README.md May 17 00:36:00.165289 systemd[1]: Finished prepare-helm.service. May 17 00:36:00.243105 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessagingDeliveryService] Starting message polling May 17 00:36:00.316741 sshd_keygen[1852]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:36:00.337818 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessagingDeliveryService] Starting send replies to MDS May 17 00:36:00.357689 systemd[1]: Finished sshd-keygen.service. May 17 00:36:00.360892 systemd[1]: Starting issuegen.service... May 17 00:36:00.373041 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:36:00.373382 systemd[1]: Finished issuegen.service. May 17 00:36:00.376630 systemd[1]: Starting systemd-user-sessions.service... May 17 00:36:00.389816 systemd[1]: Finished systemd-user-sessions.service. May 17 00:36:00.393700 systemd[1]: Started getty@tty1.service. May 17 00:36:00.399328 systemd[1]: Started serial-getty@ttyS0.service. 
May 17 00:36:00.402880 systemd[1]: Reached target getty.target. May 17 00:36:00.415155 locksmithd[1901]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:36:00.432874 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [instanceID=i-0e4b4d14950b89662] Starting association polling May 17 00:36:00.528003 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting May 17 00:36:00.625371 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessagingDeliveryService] [Association] Launching response handler May 17 00:36:00.720970 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing May 17 00:36:00.816878 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service May 17 00:36:00.912766 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized May 17 00:36:01.011331 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessageGatewayService] Starting session document processing engine... May 17 00:36:01.105708 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessageGatewayService] [EngineProcessor] Starting May 17 00:36:01.202292 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. May 17 00:36:01.299170 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0e4b4d14950b89662, requestId: a0bfd1f4-09b7-42ff-ac27-414660ecfb60 May 17 00:36:01.396154 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessageGatewayService] listening reply. May 17 00:36:01.493801 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [OfflineService] Starting document processing engine... 
May 17 00:36:01.594407 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [OfflineService] [EngineProcessor] Starting May 17 00:36:01.692284 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [OfflineService] [EngineProcessor] Initial processing May 17 00:36:01.789937 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [OfflineService] Starting message polling May 17 00:36:01.888129 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [OfflineService] Starting send replies to MDS May 17 00:36:01.988874 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [LongRunningPluginsManager] starting long running plugin manager May 17 00:36:02.016299 systemd[1]: Started kubelet.service. May 17 00:36:02.021681 systemd[1]: Reached target multi-user.target. May 17 00:36:02.032201 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:36:02.088869 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:36:02.090428 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:36:02.107199 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute May 17 00:36:02.111209 systemd[1]: Startup finished in 7.586s (kernel) + 10.960s (userspace) = 18.547s. May 17 00:36:02.204700 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [HealthCheck] HealthCheck reporting agent health. 
May 17 00:36:02.303497 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck May 17 00:36:02.402412 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [StartupProcessor] Executing startup processor tasks May 17 00:36:02.502443 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running May 17 00:36:02.601710 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk May 17 00:36:02.701876 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7 May 17 00:36:02.801575 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0e4b4d14950b89662?role=subscribe&stream=input May 17 00:36:02.906288 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0e4b4d14950b89662?role=subscribe&stream=input May 17 00:36:03.006920 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessageGatewayService] Starting receiving message from control channel May 17 00:36:03.107224 amazon-ssm-agent[1891]: 2025-05-17 00:35:59 INFO [MessageGatewayService] [EngineProcessor] Initial processing May 17 00:36:03.916545 kubelet[2053]: E0517 00:36:03.916368 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:36:03.919136 systemd[1]: kubelet.service: Main process exited, 
code=exited, status=1/FAILURE May 17 00:36:03.919376 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:36:07.716213 systemd[1]: Created slice system-sshd.slice. May 17 00:36:07.717644 systemd[1]: Started sshd@0-172.31.26.143:22-139.178.68.195:51982.service. May 17 00:36:07.925414 sshd[2063]: Accepted publickey for core from 139.178.68.195 port 51982 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:36:07.930135 sshd[2063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:07.946814 systemd[1]: Created slice user-500.slice. May 17 00:36:07.949079 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:36:07.952203 systemd-logind[1830]: New session 1 of user core. May 17 00:36:07.963371 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:36:07.965325 systemd[1]: Starting user@500.service... May 17 00:36:07.974188 (systemd)[2068]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:08.082373 systemd[2068]: Queued start job for default target default.target. May 17 00:36:08.083261 systemd[2068]: Reached target paths.target. May 17 00:36:08.083291 systemd[2068]: Reached target sockets.target. May 17 00:36:08.083305 systemd[2068]: Reached target timers.target. May 17 00:36:08.083317 systemd[2068]: Reached target basic.target. May 17 00:36:08.083464 systemd[1]: Started user@500.service. May 17 00:36:08.085716 systemd[1]: Started session-1.scope. May 17 00:36:08.093169 systemd[2068]: Reached target default.target. May 17 00:36:08.094329 systemd[2068]: Startup finished in 112ms. May 17 00:36:08.227658 systemd[1]: Started sshd@1-172.31.26.143:22-139.178.68.195:51996.service. 
May 17 00:36:08.404683 sshd[2077]: Accepted publickey for core from 139.178.68.195 port 51996 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:36:08.410443 sshd[2077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:08.424158 systemd-logind[1830]: New session 2 of user core. May 17 00:36:08.426369 systemd[1]: Started session-2.scope. May 17 00:36:08.558608 sshd[2077]: pam_unix(sshd:session): session closed for user core May 17 00:36:08.561710 systemd[1]: sshd@1-172.31.26.143:22-139.178.68.195:51996.service: Deactivated successfully. May 17 00:36:08.562642 systemd-logind[1830]: Session 2 logged out. Waiting for processes to exit. May 17 00:36:08.562720 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:36:08.563778 systemd-logind[1830]: Removed session 2. May 17 00:36:08.583776 systemd[1]: Started sshd@2-172.31.26.143:22-139.178.68.195:51998.service. May 17 00:36:08.748232 sshd[2084]: Accepted publickey for core from 139.178.68.195 port 51998 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:36:08.750047 sshd[2084]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:08.755523 systemd-logind[1830]: New session 3 of user core. May 17 00:36:08.756133 systemd[1]: Started session-3.scope. May 17 00:36:08.881156 sshd[2084]: pam_unix(sshd:session): session closed for user core May 17 00:36:08.884622 systemd[1]: sshd@2-172.31.26.143:22-139.178.68.195:51998.service: Deactivated successfully. May 17 00:36:08.885554 systemd-logind[1830]: Session 3 logged out. Waiting for processes to exit. May 17 00:36:08.885610 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:36:08.886641 systemd-logind[1830]: Removed session 3. May 17 00:36:08.903885 systemd[1]: Started sshd@3-172.31.26.143:22-139.178.68.195:52002.service. 
May 17 00:36:09.066527 sshd[2091]: Accepted publickey for core from 139.178.68.195 port 52002 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:36:09.068147 sshd[2091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:09.074101 systemd[1]: Started session-4.scope. May 17 00:36:09.074379 systemd-logind[1830]: New session 4 of user core. May 17 00:36:09.202260 sshd[2091]: pam_unix(sshd:session): session closed for user core May 17 00:36:09.205557 systemd[1]: sshd@3-172.31.26.143:22-139.178.68.195:52002.service: Deactivated successfully. May 17 00:36:09.206980 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:36:09.206994 systemd-logind[1830]: Session 4 logged out. Waiting for processes to exit. May 17 00:36:09.208577 systemd-logind[1830]: Removed session 4. May 17 00:36:09.227219 systemd[1]: Started sshd@4-172.31.26.143:22-139.178.68.195:52012.service. May 17 00:36:09.392158 sshd[2098]: Accepted publickey for core from 139.178.68.195 port 52012 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:36:09.393980 sshd[2098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:09.400204 systemd[1]: Started session-5.scope. May 17 00:36:09.400598 systemd-logind[1830]: New session 5 of user core. May 17 00:36:09.541224 sudo[2102]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:36:09.541899 sudo[2102]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:36:09.552096 dbus-daemon[1819]: Э\u000b\xea\xe6U: received setenforce notice (enforcing=1971840848) May 17 00:36:09.554118 sudo[2102]: pam_unix(sudo:session): session closed for user root May 17 00:36:09.579073 sshd[2098]: pam_unix(sshd:session): session closed for user core May 17 00:36:09.584142 systemd[1]: sshd@4-172.31.26.143:22-139.178.68.195:52012.service: Deactivated successfully. 
May 17 00:36:09.585406 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:36:09.585413 systemd-logind[1830]: Session 5 logged out. Waiting for processes to exit. May 17 00:36:09.586548 systemd-logind[1830]: Removed session 5. May 17 00:36:09.602157 systemd[1]: Started sshd@5-172.31.26.143:22-139.178.68.195:52024.service. May 17 00:36:09.763270 sshd[2106]: Accepted publickey for core from 139.178.68.195 port 52024 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:36:09.765248 sshd[2106]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:09.771139 systemd[1]: Started session-6.scope. May 17 00:36:09.771826 systemd-logind[1830]: New session 6 of user core. May 17 00:36:09.875202 sudo[2111]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:36:09.875449 sudo[2111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:36:09.879517 sudo[2111]: pam_unix(sudo:session): session closed for user root May 17 00:36:09.885607 sudo[2110]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:36:09.885953 sudo[2110]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:36:09.897241 systemd[1]: Stopping audit-rules.service... 
May 17 00:36:09.898000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 17 00:36:09.900129 kernel: kauditd_printk_skb: 178 callbacks suppressed May 17 00:36:09.900201 kernel: audit: type=1305 audit(1747442169.898:157): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 17 00:36:09.900233 auditctl[2114]: No rules May 17 00:36:09.898000 audit[2114]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9ea1ae50 a2=420 a3=0 items=0 ppid=1 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:09.900919 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:36:09.901221 systemd[1]: Stopped audit-rules.service. May 17 00:36:09.904634 systemd[1]: Starting audit-rules.service... May 17 00:36:09.907080 kernel: audit: type=1300 audit(1747442169.898:157): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9ea1ae50 a2=420 a3=0 items=0 ppid=1 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:09.907145 kernel: audit: type=1327 audit(1747442169.898:157): proctitle=2F7362696E2F617564697463746C002D44 May 17 00:36:09.898000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 May 17 00:36:09.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:09.917871 kernel: audit: type=1131 audit(1747442169.900:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.934189 augenrules[2132]: No rules May 17 00:36:09.935166 systemd[1]: Finished audit-rules.service. May 17 00:36:09.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.936615 sudo[2110]: pam_unix(sudo:session): session closed for user root May 17 00:36:09.943657 kernel: audit: type=1130 audit(1747442169.934:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.943774 kernel: audit: type=1106 audit(1747442169.935:160): pid=2110 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:36:09.935000 audit[2110]: USER_END pid=2110 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:36:09.935000 audit[2110]: CRED_DISP pid=2110 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 17 00:36:09.948034 kernel: audit: type=1104 audit(1747442169.935:161): pid=2110 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:36:09.961192 sshd[2106]: pam_unix(sshd:session): session closed for user core May 17 00:36:09.962000 audit[2106]: USER_END pid=2106 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:09.965244 systemd[1]: sshd@5-172.31.26.143:22-139.178.68.195:52024.service: Deactivated successfully. May 17 00:36:09.966296 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:36:09.968077 systemd-logind[1830]: Session 6 logged out. Waiting for processes to exit. May 17 00:36:09.968966 kernel: audit: type=1106 audit(1747442169.962:162): pid=2106 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:09.969706 systemd-logind[1830]: Removed session 6. 
May 17 00:36:09.962000 audit[2106]: CRED_DISP pid=2106 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:09.974851 kernel: audit: type=1104 audit(1747442169.962:163): pid=2106 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:09.974928 kernel: audit: type=1131 audit(1747442169.962:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.26.143:22-139.178.68.195:52024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.26.143:22-139.178.68.195:52024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.26.143:22-139.178.68.195:52040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.985507 systemd[1]: Started sshd@6-172.31.26.143:22-139.178.68.195:52040.service. 
May 17 00:36:10.148000 audit[2139]: USER_ACCT pid=2139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:10.149434 sshd[2139]: Accepted publickey for core from 139.178.68.195 port 52040 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:36:10.150000 audit[2139]: CRED_ACQ pid=2139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:10.150000 audit[2139]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9ede98d0 a2=3 a3=0 items=0 ppid=1 pid=2139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:10.150000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:36:10.151424 sshd[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:10.157685 systemd[1]: Started session-7.scope. May 17 00:36:10.158227 systemd-logind[1830]: New session 7 of user core. 
May 17 00:36:10.163000 audit[2139]: USER_START pid=2139 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:10.165000 audit[2142]: CRED_ACQ pid=2142 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:10.260000 audit[2143]: USER_ACCT pid=2143 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:36:10.261889 sudo[2143]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:36:10.261000 audit[2143]: CRED_REFR pid=2143 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:36:10.262145 sudo[2143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:36:10.263000 audit[2143]: USER_START pid=2143 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:36:10.291060 systemd[1]: Starting docker.service... 
May 17 00:36:10.332487 env[2154]: time="2025-05-17T00:36:10.332429368Z" level=info msg="Starting up" May 17 00:36:10.334737 env[2154]: time="2025-05-17T00:36:10.334693299Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:36:10.334737 env[2154]: time="2025-05-17T00:36:10.334720918Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:36:10.334930 env[2154]: time="2025-05-17T00:36:10.334748005Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:36:10.334930 env[2154]: time="2025-05-17T00:36:10.334764078Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:36:10.337542 env[2154]: time="2025-05-17T00:36:10.337504026Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:36:10.337542 env[2154]: time="2025-05-17T00:36:10.337528238Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:36:10.337702 env[2154]: time="2025-05-17T00:36:10.337551107Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:36:10.337702 env[2154]: time="2025-05-17T00:36:10.337562541Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:36:10.379233 env[2154]: time="2025-05-17T00:36:10.379195825Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 17 00:36:10.379455 env[2154]: time="2025-05-17T00:36:10.379439097Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 17 00:36:10.379739 env[2154]: time="2025-05-17T00:36:10.379719695Z" level=info msg="Loading containers: start." 
May 17 00:36:10.512000 audit[2184]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2184 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:10.512000 audit[2184]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffed2635bc0 a2=0 a3=7ffed2635bac items=0 ppid=2154 pid=2184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:10.512000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 17 00:36:10.515000 audit[2186]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2186 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:10.515000 audit[2186]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe9c9cb310 a2=0 a3=7ffe9c9cb2fc items=0 ppid=2154 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:10.515000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 17 00:36:10.517000 audit[2188]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2188 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:10.517000 audit[2188]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc07b37f00 a2=0 a3=7ffc07b37eec items=0 ppid=2154 pid=2188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:10.517000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
May 17 00:36:10.520000 audit[2190]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2190 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.520000 audit[2190]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffeefed340 a2=0 a3=7fffeefed32c items=0 ppid=2154 pid=2190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.520000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
May 17 00:36:10.523000 audit[2192]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2192 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.523000 audit[2192]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff65677ac0 a2=0 a3=7fff65677aac items=0 ppid=2154 pid=2192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.523000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
May 17 00:36:10.539000 audit[2197]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2197 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.539000 audit[2197]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff7253cce0 a2=0 a3=7fff7253cccc items=0 ppid=2154 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.539000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
May 17 00:36:10.550000 audit[2199]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2199 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.550000 audit[2199]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffceeab670 a2=0 a3=7fffceeab65c items=0 ppid=2154 pid=2199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.550000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
May 17 00:36:10.553000 audit[2201]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2201 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.553000 audit[2201]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcaf6a0680 a2=0 a3=7ffcaf6a066c items=0 ppid=2154 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.553000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
May 17 00:36:10.555000 audit[2203]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2203 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.555000 audit[2203]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe2f770070 a2=0 a3=7ffe2f77005c items=0 ppid=2154 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.555000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 17 00:36:10.566000 audit[2207]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2207 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.566000 audit[2207]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffea8e409a0 a2=0 a3=7ffea8e4098c items=0 ppid=2154 pid=2207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.566000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
May 17 00:36:10.572000 audit[2208]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2208 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.572000 audit[2208]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe5ef56430 a2=0 a3=7ffe5ef5641c items=0 ppid=2154 pid=2208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.572000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 17 00:36:10.595858 kernel: Initializing XFRM netlink socket
May 17 00:36:10.672433 env[2154]: time="2025-05-17T00:36:10.672391893Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 17 00:36:10.674435 (udev-worker)[2164]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:36:10.699000 audit[2216]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2216 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.699000 audit[2216]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff16e57d60 a2=0 a3=7fff16e57d4c items=0 ppid=2154 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.699000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
May 17 00:36:10.714000 audit[2219]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2219 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.714000 audit[2219]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc2a80aa70 a2=0 a3=7ffc2a80aa5c items=0 ppid=2154 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.714000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
May 17 00:36:10.718000 audit[2222]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2222 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.718000 audit[2222]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc4b5e6940 a2=0 a3=7ffc4b5e692c items=0 ppid=2154 pid=2222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.718000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
May 17 00:36:10.720000 audit[2224]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2224 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.720000 audit[2224]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd24695480 a2=0 a3=7ffd2469546c items=0 ppid=2154 pid=2224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.720000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
May 17 00:36:10.723000 audit[2226]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2226 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.723000 audit[2226]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc404dca50 a2=0 a3=7ffc404dca3c items=0 ppid=2154 pid=2226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.723000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
May 17 00:36:10.725000 audit[2228]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2228 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.725000 audit[2228]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffdf2defe00 a2=0 a3=7ffdf2defdec items=0 ppid=2154 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.725000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
May 17 00:36:10.727000 audit[2230]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2230 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.727000 audit[2230]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc8a9f2d70 a2=0 a3=7ffc8a9f2d5c items=0 ppid=2154 pid=2230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.727000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
May 17 00:36:10.758000 audit[2233]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2233 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.758000 audit[2233]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc31f37d30 a2=0 a3=7ffc31f37d1c items=0 ppid=2154 pid=2233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.758000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
May 17 00:36:10.761000 audit[2235]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2235 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.761000 audit[2235]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffcf8252ce0 a2=0 a3=7ffcf8252ccc items=0 ppid=2154 pid=2235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.761000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
May 17 00:36:10.763000 audit[2237]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2237 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.763000 audit[2237]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffff09c6740 a2=0 a3=7ffff09c672c items=0 ppid=2154 pid=2237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.763000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
May 17 00:36:10.766000 audit[2239]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2239 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.766000 audit[2239]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd04864080 a2=0 a3=7ffd0486406c items=0 ppid=2154 pid=2239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.766000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
May 17 00:36:10.768597 systemd-networkd[1517]: docker0: Link UP
May 17 00:36:10.779000 audit[2243]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2243 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.779000 audit[2243]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf674f1c0 a2=0 a3=7ffcf674f1ac items=0 ppid=2154 pid=2243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.779000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
May 17 00:36:10.784000 audit[2244]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2244 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:36:10.784000 audit[2244]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffddeb9520 a2=0 a3=7fffddeb950c items=0 ppid=2154 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:10.784000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 17 00:36:10.785924 env[2154]: time="2025-05-17T00:36:10.785886668Z" level=info msg="Loading containers: done."
May 17 00:36:10.805700 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3316554659-merged.mount: Deactivated successfully.
May 17 00:36:10.820260 env[2154]: time="2025-05-17T00:36:10.820181064Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 00:36:10.820671 env[2154]: time="2025-05-17T00:36:10.820524115Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 17 00:36:10.820671 env[2154]: time="2025-05-17T00:36:10.820640493Z" level=info msg="Daemon has completed initialization"
May 17 00:36:10.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:10.844651 systemd[1]: Started docker.service.
May 17 00:36:10.854631 env[2154]: time="2025-05-17T00:36:10.854559908Z" level=info msg="API listen on /run/docker.sock"
May 17 00:36:12.123299 env[1842]: time="2025-05-17T00:36:12.123248849Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 17 00:36:12.710991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923063183.mount: Deactivated successfully.
May 17 00:36:14.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:14.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:14.035880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:36:14.036161 systemd[1]: Stopped kubelet.service.
May 17 00:36:14.038381 systemd[1]: Starting kubelet.service...
May 17 00:36:14.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:14.282467 systemd[1]: Started kubelet.service.
May 17 00:36:14.365917 kubelet[2283]: E0517 00:36:14.365769 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:36:14.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 17 00:36:14.370654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:36:14.370917 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:36:14.531958 env[1842]: time="2025-05-17T00:36:14.531869260Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:14.534517 env[1842]: time="2025-05-17T00:36:14.534475197Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:14.536907 env[1842]: time="2025-05-17T00:36:14.536862470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:14.538851 env[1842]: time="2025-05-17T00:36:14.538788512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:14.539615 env[1842]: time="2025-05-17T00:36:14.539568590Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 17 00:36:14.540688 env[1842]: time="2025-05-17T00:36:14.540656723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 17 00:36:16.353065 env[1842]: time="2025-05-17T00:36:16.353002154Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:16.356029 env[1842]: time="2025-05-17T00:36:16.355977563Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:16.358367 env[1842]: time="2025-05-17T00:36:16.358316820Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:16.362488 env[1842]: time="2025-05-17T00:36:16.362430196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:16.363429 env[1842]: time="2025-05-17T00:36:16.363386883Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 17 00:36:16.364154 env[1842]: time="2025-05-17T00:36:16.364121520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 17 00:36:17.811520 env[1842]: time="2025-05-17T00:36:17.811470051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:17.822224 env[1842]: time="2025-05-17T00:36:17.822169338Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:17.827733 env[1842]: time="2025-05-17T00:36:17.827682297Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:17.834530 env[1842]: time="2025-05-17T00:36:17.834484097Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:17.835335 env[1842]: time="2025-05-17T00:36:17.835290443Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 17 00:36:17.836241 env[1842]: time="2025-05-17T00:36:17.836215040Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 17 00:36:18.875086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2059058822.mount: Deactivated successfully.
May 17 00:36:19.076806 amazon-ssm-agent[1891]: 2025-05-17 00:36:19 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
May 17 00:36:19.618427 env[1842]: time="2025-05-17T00:36:19.618354985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:19.621207 env[1842]: time="2025-05-17T00:36:19.620811834Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:19.623482 env[1842]: time="2025-05-17T00:36:19.623441449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:19.625618 env[1842]: time="2025-05-17T00:36:19.625518915Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:19.626306 env[1842]: time="2025-05-17T00:36:19.626169729Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 17 00:36:19.627190 env[1842]: time="2025-05-17T00:36:19.627155589Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 17 00:36:20.125054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount260722789.mount: Deactivated successfully.
May 17 00:36:21.239732 env[1842]: time="2025-05-17T00:36:21.239668459Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:21.243676 env[1842]: time="2025-05-17T00:36:21.243620709Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:21.246551 env[1842]: time="2025-05-17T00:36:21.246504836Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:21.248944 env[1842]: time="2025-05-17T00:36:21.248901995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:21.249883 env[1842]: time="2025-05-17T00:36:21.249820290Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 17 00:36:21.250417 env[1842]: time="2025-05-17T00:36:21.250392840Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:36:21.737202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1135150405.mount: Deactivated successfully.
May 17 00:36:21.747104 env[1842]: time="2025-05-17T00:36:21.747018511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:21.749182 env[1842]: time="2025-05-17T00:36:21.749136155Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:21.750962 env[1842]: time="2025-05-17T00:36:21.750917248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:21.754126 env[1842]: time="2025-05-17T00:36:21.754081449Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:21.754692 env[1842]: time="2025-05-17T00:36:21.754653584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 17 00:36:21.755490 env[1842]: time="2025-05-17T00:36:21.755461325Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 17 00:36:22.325130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349356656.mount: Deactivated successfully.
May 17 00:36:24.535805 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 00:36:24.536113 systemd[1]: Stopped kubelet.service.
May 17 00:36:24.538303 systemd[1]: Starting kubelet.service...
May 17 00:36:24.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:24.539442 kernel: kauditd_printk_skb: 88 callbacks suppressed
May 17 00:36:24.539519 kernel: audit: type=1130 audit(1747442184.535:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:24.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:24.549854 kernel: audit: type=1131 audit(1747442184.535:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:25.013344 env[1842]: time="2025-05-17T00:36:25.013097813Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:25.021404 env[1842]: time="2025-05-17T00:36:25.021352877Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:25.028769 env[1842]: time="2025-05-17T00:36:25.028704803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:25.033261 env[1842]: time="2025-05-17T00:36:25.033207444Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:25.035187 env[1842]: time="2025-05-17T00:36:25.035127625Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 17 00:36:25.422246 systemd[1]: Started kubelet.service.
May 17 00:36:25.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:25.428202 kernel: audit: type=1130 audit(1747442185.422:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:25.498136 kubelet[2304]: E0517 00:36:25.498082 2304 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:36:25.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 17 00:36:25.500113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:36:25.500273 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:36:25.504862 kernel: audit: type=1131 audit(1747442185.499:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 17 00:36:28.376510 systemd[1]: Stopped kubelet.service.
May 17 00:36:28.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:28.380420 systemd[1]: Starting kubelet.service...
May 17 00:36:28.384009 kernel: audit: type=1130 audit(1747442188.376:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:28.384133 kernel: audit: type=1131 audit(1747442188.376:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:28.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:28.431162 systemd[1]: Reloading.
May 17 00:36:28.523555 /usr/lib/systemd/system-generators/torcx-generator[2350]: time="2025-05-17T00:36:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:36:28.523597 /usr/lib/systemd/system-generators/torcx-generator[2350]: time="2025-05-17T00:36:28Z" level=info msg="torcx already run"
May 17 00:36:28.678557 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:36:28.678585 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:36:28.704468 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:36:28.844045 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 17 00:36:28.844163 systemd[1]: kubelet.service: Failed with result 'signal'.
May 17 00:36:28.844584 systemd[1]: Stopped kubelet.service.
May 17 00:36:28.852537 kernel: audit: type=1130 audit(1747442188.843:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 17 00:36:28.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 17 00:36:28.847099 systemd[1]: Starting kubelet.service...
May 17 00:36:29.082418 systemd[1]: Started kubelet.service.
May 17 00:36:29.087966 kernel: audit: type=1130 audit(1747442189.082:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:29.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:29.160793 kubelet[2423]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:36:29.160793 kubelet[2423]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 17 00:36:29.160793 kubelet[2423]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:36:29.161370 kubelet[2423]: I0517 00:36:29.160897 2423 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:36:29.505445 kubelet[2423]: I0517 00:36:29.505318 2423 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 17 00:36:29.505627 kubelet[2423]: I0517 00:36:29.505610 2423 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:36:29.506396 kubelet[2423]: I0517 00:36:29.506371 2423 server.go:934] "Client rotation is on, will bootstrap in background"
May 17 00:36:29.515246 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 17 00:36:29.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:29.520058 kernel: audit: type=1131 audit(1747442189.514:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 17 00:36:29.556504 kubelet[2423]: E0517 00:36:29.556462 2423 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:29.557931 kubelet[2423]: I0517 00:36:29.557899 2423 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:36:29.570145 kubelet[2423]: E0517 00:36:29.570093 2423 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:36:29.570145 kubelet[2423]: I0517 00:36:29.570127 2423 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:36:29.575643 kubelet[2423]: I0517 00:36:29.575604 2423 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:36:29.576203 kubelet[2423]: I0517 00:36:29.576166 2423 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:36:29.576377 kubelet[2423]: I0517 00:36:29.576334 2423 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:36:29.576603 kubelet[2423]: I0517 00:36:29.576378 2423 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManage
rPolicyOptions":null,"CgroupVersion":1} May 17 00:36:29.576755 kubelet[2423]: I0517 00:36:29.576612 2423 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:36:29.576755 kubelet[2423]: I0517 00:36:29.576625 2423 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:36:29.576898 kubelet[2423]: I0517 00:36:29.576769 2423 state_mem.go:36] "Initialized new in-memory state store" May 17 00:36:29.589597 kubelet[2423]: I0517 00:36:29.589549 2423 kubelet.go:408] "Attempting to sync node with API server" May 17 00:36:29.589597 kubelet[2423]: I0517 00:36:29.589605 2423 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:36:29.589788 kubelet[2423]: I0517 00:36:29.589645 2423 kubelet.go:314] "Adding apiserver pod source" May 17 00:36:29.589788 kubelet[2423]: I0517 00:36:29.589667 2423 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:36:29.600114 kubelet[2423]: W0517 00:36:29.600048 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-143&limit=500&resourceVersion=0": dial tcp 172.31.26.143:6443: connect: connection refused May 17 00:36:29.600294 kubelet[2423]: E0517 00:36:29.600128 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-143&limit=500&resourceVersion=0\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:29.600294 kubelet[2423]: I0517 00:36:29.600251 2423 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:36:29.601762 kubelet[2423]: I0517 00:36:29.601730 2423 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are 
in static kubelet mode" May 17 00:36:29.612686 kubelet[2423]: W0517 00:36:29.612630 2423 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:36:29.619700 kubelet[2423]: I0517 00:36:29.619663 2423 server.go:1274] "Started kubelet" May 17 00:36:29.627083 kubelet[2423]: W0517 00:36:29.627027 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.143:6443: connect: connection refused May 17 00:36:29.627224 kubelet[2423]: E0517 00:36:29.627094 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:29.627257 kubelet[2423]: I0517 00:36:29.627214 2423 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:36:29.631341 kubelet[2423]: I0517 00:36:29.631284 2423 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:36:29.632022 kubelet[2423]: I0517 00:36:29.632003 2423 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:36:29.644250 kernel: audit: type=1400 audit(1747442189.634:212): avc: denied { mac_admin } for pid=2423 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:36:29.644368 kernel: audit: type=1401 audit(1747442189.634:212): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:36:29.634000 audit[2423]: AVC avc: denied { mac_admin } for pid=2423 
comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:36:29.634000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:36:29.644517 kubelet[2423]: I0517 00:36:29.635449 2423 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 00:36:29.644517 kubelet[2423]: I0517 00:36:29.635487 2423 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 00:36:29.644517 kubelet[2423]: I0517 00:36:29.635550 2423 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:36:29.634000 audit[2423]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009c1fb0 a1=c0009cde48 a2=c0009c1f80 a3=25 items=0 ppid=1 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.660573 kernel: audit: type=1300 audit(1747442189.634:212): arch=c000003e syscall=188 success=no exit=-22 a0=c0009c1fb0 a1=c0009cde48 a2=c0009c1f80 a3=25 items=0 ppid=1 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.660665 kernel: audit: type=1327 audit(1747442189.634:212): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:36:29.634000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:36:29.660741 kubelet[2423]: I0517 00:36:29.650693 2423 server.go:449] "Adding debug handlers to kubelet server" May 17 00:36:29.660741 kubelet[2423]: E0517 00:36:29.650217 2423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.143:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.143:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-143.18402969c1cf0ab5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-143,UID:ip-172-31-26-143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-143,},FirstTimestamp:2025-05-17 00:36:29.619628725 +0000 UTC m=+0.527229189,LastTimestamp:2025-05-17 00:36:29.619628725 +0000 UTC m=+0.527229189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-143,}" May 17 00:36:29.660741 kubelet[2423]: E0517 00:36:29.657623 2423 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:36:29.660741 kubelet[2423]: I0517 00:36:29.657797 2423 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:36:29.634000 audit[2423]: AVC avc: denied { mac_admin } for pid=2423 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:36:29.666306 kubelet[2423]: I0517 00:36:29.661194 2423 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:36:29.666306 kubelet[2423]: E0517 00:36:29.661456 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-143\" not found" May 17 00:36:29.666306 kubelet[2423]: I0517 00:36:29.662527 2423 factory.go:221] Registration of the systemd container factory successfully May 17 00:36:29.666306 kubelet[2423]: I0517 00:36:29.662633 2423 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:36:29.666306 kubelet[2423]: E0517 00:36:29.662990 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-143?timeout=10s\": dial tcp 172.31.26.143:6443: connect: connection refused" interval="200ms" May 17 00:36:29.666306 kubelet[2423]: I0517 00:36:29.664070 2423 factory.go:221] Registration of the containerd container factory successfully May 17 00:36:29.669631 kernel: audit: type=1400 audit(1747442189.634:213): avc: denied { mac_admin } for pid=2423 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:36:29.669753 kernel: audit: 
type=1401 audit(1747442189.634:213): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:36:29.634000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:36:29.634000 audit[2423]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009dd760 a1=c0009cde60 a2=c000b02060 a3=25 items=0 ppid=1 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.684212 kernel: audit: type=1300 audit(1747442189.634:213): arch=c000003e syscall=188 success=no exit=-22 a0=c0009dd760 a1=c0009cde60 a2=c000b02060 a3=25 items=0 ppid=1 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.684327 kernel: audit: type=1327 audit(1747442189.634:213): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:36:29.634000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:36:29.684395 kubelet[2423]: I0517 00:36:29.678603 2423 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:36:29.684395 kubelet[2423]: I0517 00:36:29.678669 2423 reconciler.go:26] "Reconciler: start to sync state" May 17 00:36:29.688238 kernel: audit: type=1325 audit(1747442189.642:214): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2437 
subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:29.642000 audit[2437]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:29.697586 kernel: audit: type=1300 audit(1747442189.642:214): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff78e7ae20 a2=0 a3=7fff78e7ae0c items=0 ppid=2423 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.642000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff78e7ae20 a2=0 a3=7fff78e7ae0c items=0 ppid=2423 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.642000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 00:36:29.670000 audit[2438]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:29.670000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7392ee30 a2=0 a3=7ffe7392ee1c items=0 ppid=2423 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.670000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 00:36:29.681000 audit[2440]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:29.681000 audit[2440]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc0ad604b0 a2=0 a3=7ffc0ad6049c items=0 ppid=2423 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:36:29.683000 audit[2442]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:29.683000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff1f65c270 a2=0 a3=7fff1f65c25c items=0 ppid=2423 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.683000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:36:29.704439 kubelet[2423]: W0517 00:36:29.704240 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.143:6443: connect: connection refused May 17 00:36:29.704995 kubelet[2423]: E0517 00:36:29.704670 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:29.710000 audit[2447]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2447 
subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:29.710000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe915b0ca0 a2=0 a3=7ffe915b0c8c items=0 ppid=2423 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 17 00:36:29.711506 kubelet[2423]: I0517 00:36:29.711462 2423 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:36:29.713000 audit[2450]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:29.713000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcb7878400 a2=0 a3=7ffcb78783ec items=0 ppid=2423 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.713000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 00:36:29.714677 kubelet[2423]: I0517 00:36:29.714583 2423 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:36:29.714677 kubelet[2423]: I0517 00:36:29.714610 2423 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:36:29.714677 kubelet[2423]: I0517 00:36:29.714633 2423 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:36:29.714861 kubelet[2423]: E0517 00:36:29.714689 2423 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:36:29.715643 kubelet[2423]: I0517 00:36:29.715624 2423 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:36:29.715785 kubelet[2423]: I0517 00:36:29.715772 2423 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:36:29.715937 kubelet[2423]: I0517 00:36:29.715927 2423 state_mem.go:36] "Initialized new in-memory state store" May 17 00:36:29.716314 kubelet[2423]: W0517 00:36:29.716233 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.143:6443: connect: connection refused May 17 00:36:29.716409 kubelet[2423]: E0517 00:36:29.716331 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:29.716000 audit[2453]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:29.716000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda7fd9e20 a2=0 a3=7ffda7fd9e0c items=0 ppid=2423 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.716000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:36:29.717000 audit[2451]: NETFILTER_CFG table=mangle:33 family=2 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:29.717000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb6c807c0 a2=0 a3=7ffcb6c807ac items=0 ppid=2423 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:36:29.717000 audit[2454]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:29.717000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffcf12ae9c0 a2=0 a3=7ffcf12ae9ac items=0 ppid=2423 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.717000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:36:29.719000 audit[2455]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:29.719000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffac4f9c0 a2=0 a3=7ffffac4f9ac items=0 ppid=2423 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.719000 audit[2456]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:29.719000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdd04f7570 a2=0 a3=10e3 items=0 ppid=2423 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.719000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:36:29.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:36:29.721000 audit[2457]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:29.721000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5f952630 a2=0 a3=7fff5f95261c items=0 ppid=2423 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.721000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:36:29.724057 kubelet[2423]: I0517 00:36:29.723969 2423 policy_none.go:49] "None policy: Start" May 17 00:36:29.725038 kubelet[2423]: I0517 00:36:29.725021 2423 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:36:29.725447 kubelet[2423]: I0517 00:36:29.725434 2423 state_mem.go:35] "Initializing new 
in-memory state store" May 17 00:36:29.733000 audit[2423]: AVC avc: denied { mac_admin } for pid=2423 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:36:29.733000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:36:29.733000 audit[2423]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c46600 a1=c000df5368 a2=c000c465d0 a3=25 items=0 ppid=1 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:29.733000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:36:29.735527 kubelet[2423]: I0517 00:36:29.734316 2423 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:36:29.735527 kubelet[2423]: I0517 00:36:29.734393 2423 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:36:29.735527 kubelet[2423]: I0517 00:36:29.734534 2423 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:36:29.735527 kubelet[2423]: I0517 00:36:29.734547 2423 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:36:29.736722 kubelet[2423]: I0517 00:36:29.736702 2423 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:36:29.738210 kubelet[2423]: E0517 00:36:29.738182 2423 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-143\" not found" May 17 00:36:29.836242 kubelet[2423]: I0517 00:36:29.836208 2423 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-143" May 17 00:36:29.836814 kubelet[2423]: E0517 00:36:29.836775 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.143:6443/api/v1/nodes\": dial tcp 172.31.26.143:6443: connect: connection refused" node="ip-172-31-26-143" May 17 00:36:29.864616 kubelet[2423]: E0517 00:36:29.864563 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-143?timeout=10s\": dial tcp 172.31.26.143:6443: connect: connection refused" interval="400ms" May 17 00:36:29.980399 kubelet[2423]: I0517 00:36:29.980244 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c40e8e640dff3343760b1892e6f4318-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-143\" (UID: \"8c40e8e640dff3343760b1892e6f4318\") " pod="kube-system/kube-controller-manager-ip-172-31-26-143" May 17 00:36:29.980689 kubelet[2423]: I0517 
00:36:29.980657 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c40e8e640dff3343760b1892e6f4318-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-143\" (UID: \"8c40e8e640dff3343760b1892e6f4318\") " pod="kube-system/kube-controller-manager-ip-172-31-26-143" May 17 00:36:29.980787 kubelet[2423]: I0517 00:36:29.980706 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c40e8e640dff3343760b1892e6f4318-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-143\" (UID: \"8c40e8e640dff3343760b1892e6f4318\") " pod="kube-system/kube-controller-manager-ip-172-31-26-143" May 17 00:36:29.980787 kubelet[2423]: I0517 00:36:29.980739 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f29401d298596df7da697f329fe35b4-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-143\" (UID: \"9f29401d298596df7da697f329fe35b4\") " pod="kube-system/kube-scheduler-ip-172-31-26-143" May 17 00:36:29.980787 kubelet[2423]: I0517 00:36:29.980765 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e281a640983a97c3f7d174bd27d531c1-ca-certs\") pod \"kube-apiserver-ip-172-31-26-143\" (UID: \"e281a640983a97c3f7d174bd27d531c1\") " pod="kube-system/kube-apiserver-ip-172-31-26-143" May 17 00:36:29.980969 kubelet[2423]: I0517 00:36:29.980792 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e281a640983a97c3f7d174bd27d531c1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-143\" (UID: \"e281a640983a97c3f7d174bd27d531c1\") " 
pod="kube-system/kube-apiserver-ip-172-31-26-143" May 17 00:36:29.980969 kubelet[2423]: I0517 00:36:29.980817 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e281a640983a97c3f7d174bd27d531c1-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-143\" (UID: \"e281a640983a97c3f7d174bd27d531c1\") " pod="kube-system/kube-apiserver-ip-172-31-26-143" May 17 00:36:29.980969 kubelet[2423]: I0517 00:36:29.980865 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c40e8e640dff3343760b1892e6f4318-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-143\" (UID: \"8c40e8e640dff3343760b1892e6f4318\") " pod="kube-system/kube-controller-manager-ip-172-31-26-143" May 17 00:36:29.980969 kubelet[2423]: I0517 00:36:29.980891 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c40e8e640dff3343760b1892e6f4318-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-143\" (UID: \"8c40e8e640dff3343760b1892e6f4318\") " pod="kube-system/kube-controller-manager-ip-172-31-26-143" May 17 00:36:30.039208 kubelet[2423]: I0517 00:36:30.039171 2423 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-143" May 17 00:36:30.039588 kubelet[2423]: E0517 00:36:30.039545 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.143:6443/api/v1/nodes\": dial tcp 172.31.26.143:6443: connect: connection refused" node="ip-172-31-26-143" May 17 00:36:30.124114 env[1842]: time="2025-05-17T00:36:30.123577479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-143,Uid:e281a640983a97c3f7d174bd27d531c1,Namespace:kube-system,Attempt:0,}" May 17 00:36:30.125019 env[1842]: 
time="2025-05-17T00:36:30.124880657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-143,Uid:8c40e8e640dff3343760b1892e6f4318,Namespace:kube-system,Attempt:0,}" May 17 00:36:30.132166 env[1842]: time="2025-05-17T00:36:30.132120449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-143,Uid:9f29401d298596df7da697f329fe35b4,Namespace:kube-system,Attempt:0,}" May 17 00:36:30.265275 kubelet[2423]: E0517 00:36:30.265215 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-143?timeout=10s\": dial tcp 172.31.26.143:6443: connect: connection refused" interval="800ms" May 17 00:36:30.416975 kubelet[2423]: W0517 00:36:30.416786 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-143&limit=500&resourceVersion=0": dial tcp 172.31.26.143:6443: connect: connection refused May 17 00:36:30.416975 kubelet[2423]: E0517 00:36:30.416901 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-143&limit=500&resourceVersion=0\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:30.441867 kubelet[2423]: I0517 00:36:30.441805 2423 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-143" May 17 00:36:30.442209 kubelet[2423]: E0517 00:36:30.442176 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.143:6443/api/v1/nodes\": dial tcp 172.31.26.143:6443: connect: connection refused" node="ip-172-31-26-143" May 17 00:36:30.617211 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount527052980.mount: Deactivated successfully. May 17 00:36:30.636187 env[1842]: time="2025-05-17T00:36:30.636132885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.638189 env[1842]: time="2025-05-17T00:36:30.638112367Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.644070 env[1842]: time="2025-05-17T00:36:30.644022702Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.646824 env[1842]: time="2025-05-17T00:36:30.646690260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.648867 env[1842]: time="2025-05-17T00:36:30.648812823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.650900 env[1842]: time="2025-05-17T00:36:30.650863223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.652635 env[1842]: time="2025-05-17T00:36:30.652602423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.654717 env[1842]: time="2025-05-17T00:36:30.654678975Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.659458 env[1842]: time="2025-05-17T00:36:30.659419365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.661497 env[1842]: time="2025-05-17T00:36:30.661432995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.668076 env[1842]: time="2025-05-17T00:36:30.667236785Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.682315 env[1842]: time="2025-05-17T00:36:30.682266984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:30.709719 env[1842]: time="2025-05-17T00:36:30.709622757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:36:30.709719 env[1842]: time="2025-05-17T00:36:30.709665477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:36:30.709719 env[1842]: time="2025-05-17T00:36:30.709678323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:36:30.710004 env[1842]: time="2025-05-17T00:36:30.709804182Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e72c95a4bef3ce09ae6f919cd2c7228f5da1b95406909980005e23b1409d3ed pid=2465 runtime=io.containerd.runc.v2 May 17 00:36:30.723482 env[1842]: time="2025-05-17T00:36:30.723239185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:36:30.723482 env[1842]: time="2025-05-17T00:36:30.723303019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:36:30.723482 env[1842]: time="2025-05-17T00:36:30.723319116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:36:30.724137 env[1842]: time="2025-05-17T00:36:30.724051629Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f9aa5543ea21278c1a7be9af2fdaf3d5c70823f63ae84b1808b28926b92bb4a pid=2482 runtime=io.containerd.runc.v2 May 17 00:36:30.775625 env[1842]: time="2025-05-17T00:36:30.775533228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:36:30.776079 env[1842]: time="2025-05-17T00:36:30.776020869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:36:30.776310 env[1842]: time="2025-05-17T00:36:30.776246307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:36:30.776757 env[1842]: time="2025-05-17T00:36:30.776694243Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4770ad18ec1f77f183705f1e9e9bf3746fe7dada50f229fd921be8b807d43e72 pid=2526 runtime=io.containerd.runc.v2 May 17 00:36:30.838165 env[1842]: time="2025-05-17T00:36:30.838109053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-143,Uid:e281a640983a97c3f7d174bd27d531c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e72c95a4bef3ce09ae6f919cd2c7228f5da1b95406909980005e23b1409d3ed\"" May 17 00:36:30.842268 env[1842]: time="2025-05-17T00:36:30.842219035Z" level=info msg="CreateContainer within sandbox \"7e72c95a4bef3ce09ae6f919cd2c7228f5da1b95406909980005e23b1409d3ed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:36:30.872823 env[1842]: time="2025-05-17T00:36:30.872661925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-143,Uid:8c40e8e640dff3343760b1892e6f4318,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f9aa5543ea21278c1a7be9af2fdaf3d5c70823f63ae84b1808b28926b92bb4a\"" May 17 00:36:30.876775 env[1842]: time="2025-05-17T00:36:30.876729256Z" level=info msg="CreateContainer within sandbox \"4f9aa5543ea21278c1a7be9af2fdaf3d5c70823f63ae84b1808b28926b92bb4a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:36:30.887614 env[1842]: time="2025-05-17T00:36:30.887560047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-143,Uid:9f29401d298596df7da697f329fe35b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4770ad18ec1f77f183705f1e9e9bf3746fe7dada50f229fd921be8b807d43e72\"" May 17 00:36:30.890614 env[1842]: time="2025-05-17T00:36:30.890570824Z" level=info msg="CreateContainer within sandbox 
\"4770ad18ec1f77f183705f1e9e9bf3746fe7dada50f229fd921be8b807d43e72\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:36:30.895419 env[1842]: time="2025-05-17T00:36:30.895362208Z" level=info msg="CreateContainer within sandbox \"7e72c95a4bef3ce09ae6f919cd2c7228f5da1b95406909980005e23b1409d3ed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c88da5242dc893feec089dd047268df29265681fc0ced8b3cafcccc3d8f38ef5\"" May 17 00:36:30.896174 env[1842]: time="2025-05-17T00:36:30.896134672Z" level=info msg="StartContainer for \"c88da5242dc893feec089dd047268df29265681fc0ced8b3cafcccc3d8f38ef5\"" May 17 00:36:30.918667 env[1842]: time="2025-05-17T00:36:30.918527184Z" level=info msg="CreateContainer within sandbox \"4f9aa5543ea21278c1a7be9af2fdaf3d5c70823f63ae84b1808b28926b92bb4a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7aa8b9d9977475576c02a656c8986fb397f283ab717067d928a1e4831d6aefa8\"" May 17 00:36:30.919863 env[1842]: time="2025-05-17T00:36:30.919788770Z" level=info msg="StartContainer for \"7aa8b9d9977475576c02a656c8986fb397f283ab717067d928a1e4831d6aefa8\"" May 17 00:36:30.927549 kubelet[2423]: W0517 00:36:30.927392 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.143:6443: connect: connection refused May 17 00:36:30.927549 kubelet[2423]: E0517 00:36:30.927494 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:30.946441 env[1842]: time="2025-05-17T00:36:30.946377432Z" level=info msg="CreateContainer within sandbox 
\"4770ad18ec1f77f183705f1e9e9bf3746fe7dada50f229fd921be8b807d43e72\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4babc7378c186cbe450d1b0e0dcf890f0005ff4a1d7a12de0bf5ac8038ba0ed2\"" May 17 00:36:30.949945 env[1842]: time="2025-05-17T00:36:30.949889456Z" level=info msg="StartContainer for \"4babc7378c186cbe450d1b0e0dcf890f0005ff4a1d7a12de0bf5ac8038ba0ed2\"" May 17 00:36:31.046503 env[1842]: time="2025-05-17T00:36:31.046451546Z" level=info msg="StartContainer for \"c88da5242dc893feec089dd047268df29265681fc0ced8b3cafcccc3d8f38ef5\" returns successfully" May 17 00:36:31.051166 env[1842]: time="2025-05-17T00:36:31.051115555Z" level=info msg="StartContainer for \"7aa8b9d9977475576c02a656c8986fb397f283ab717067d928a1e4831d6aefa8\" returns successfully" May 17 00:36:31.066361 kubelet[2423]: E0517 00:36:31.066285 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-143?timeout=10s\": dial tcp 172.31.26.143:6443: connect: connection refused" interval="1.6s" May 17 00:36:31.084408 kubelet[2423]: W0517 00:36:31.084198 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.143:6443: connect: connection refused May 17 00:36:31.084408 kubelet[2423]: E0517 00:36:31.084341 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:31.087002 kubelet[2423]: W0517 00:36:31.086855 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.31.26.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.143:6443: connect: connection refused May 17 00:36:31.087002 kubelet[2423]: E0517 00:36:31.086953 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:31.135318 env[1842]: time="2025-05-17T00:36:31.135259370Z" level=info msg="StartContainer for \"4babc7378c186cbe450d1b0e0dcf890f0005ff4a1d7a12de0bf5ac8038ba0ed2\" returns successfully" May 17 00:36:31.246568 kubelet[2423]: I0517 00:36:31.245111 2423 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-143" May 17 00:36:31.246568 kubelet[2423]: E0517 00:36:31.245594 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.143:6443/api/v1/nodes\": dial tcp 172.31.26.143:6443: connect: connection refused" node="ip-172-31-26-143" May 17 00:36:31.700430 kubelet[2423]: E0517 00:36:31.700388 2423 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:32.667146 kubelet[2423]: E0517 00:36:32.667090 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-143?timeout=10s\": dial tcp 172.31.26.143:6443: connect: connection refused" interval="3.2s" May 17 00:36:32.815044 
kubelet[2423]: W0517 00:36:32.814997 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.143:6443: connect: connection refused May 17 00:36:32.815573 kubelet[2423]: E0517 00:36:32.815057 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:32.849073 kubelet[2423]: I0517 00:36:32.849034 2423 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-143" May 17 00:36:32.850548 kubelet[2423]: E0517 00:36:32.850502 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.143:6443/api/v1/nodes\": dial tcp 172.31.26.143:6443: connect: connection refused" node="ip-172-31-26-143" May 17 00:36:32.934735 kubelet[2423]: W0517 00:36:32.934609 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-143&limit=500&resourceVersion=0": dial tcp 172.31.26.143:6443: connect: connection refused May 17 00:36:32.934735 kubelet[2423]: E0517 00:36:32.934666 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-143&limit=500&resourceVersion=0\": dial tcp 172.31.26.143:6443: connect: connection refused" logger="UnhandledError" May 17 00:36:34.940075 kubelet[2423]: E0517 00:36:34.939951 2423 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for 
the condition; caused by: nodes "ip-172-31-26-143" not found May 17 00:36:35.298443 kubelet[2423]: E0517 00:36:35.298370 2423 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-26-143" not found May 17 00:36:35.745115 kubelet[2423]: E0517 00:36:35.745012 2423 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-26-143" not found May 17 00:36:35.874499 kubelet[2423]: E0517 00:36:35.874433 2423 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-143\" not found" node="ip-172-31-26-143" May 17 00:36:36.053158 kubelet[2423]: I0517 00:36:36.053114 2423 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-143" May 17 00:36:36.058572 kubelet[2423]: I0517 00:36:36.058541 2423 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-26-143" May 17 00:36:36.058786 kubelet[2423]: E0517 00:36:36.058771 2423 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-143\": node \"ip-172-31-26-143\" not found" May 17 00:36:36.072319 kubelet[2423]: E0517 00:36:36.072270 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-143\" not found" May 17 00:36:36.173042 kubelet[2423]: E0517 00:36:36.173007 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-143\" not found" May 17 00:36:36.274092 kubelet[2423]: E0517 00:36:36.274047 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-143\" not found" May 17 00:36:36.375002 kubelet[2423]: E0517 00:36:36.374875 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-143\" not found" May 17 00:36:36.475583 
kubelet[2423]: E0517 00:36:36.475534 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-143\" not found" May 17 00:36:36.576526 kubelet[2423]: E0517 00:36:36.576480 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-143\" not found" May 17 00:36:36.581277 systemd[1]: Reloading. May 17 00:36:36.676971 kubelet[2423]: E0517 00:36:36.676820 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-143\" not found" May 17 00:36:36.695613 /usr/lib/systemd/system-generators/torcx-generator[2714]: time="2025-05-17T00:36:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:36:36.695652 /usr/lib/systemd/system-generators/torcx-generator[2714]: time="2025-05-17T00:36:36Z" level=info msg="torcx already run" May 17 00:36:36.777576 kubelet[2423]: E0517 00:36:36.777532 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-143\" not found" May 17 00:36:36.800778 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:36:36.800803 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:36:36.820694 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:36:36.878243 kubelet[2423]: E0517 00:36:36.878196 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-143\" not found" May 17 00:36:36.932264 systemd[1]: Stopping kubelet.service... May 17 00:36:36.950024 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:36:36.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.950728 systemd[1]: Stopped kubelet.service. May 17 00:36:36.961309 kernel: kauditd_printk_skb: 38 callbacks suppressed May 17 00:36:36.961491 kernel: audit: type=1131 audit(1747442196.949:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:36.962222 systemd[1]: Starting kubelet.service... May 17 00:36:38.745659 systemd[1]: Started kubelet.service. May 17 00:36:38.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:38.754932 kernel: audit: type=1130 audit(1747442198.745:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:38.872216 kubelet[2785]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:36:38.872746 kubelet[2785]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 17 00:36:38.872816 kubelet[2785]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:36:38.873031 kubelet[2785]: I0517 00:36:38.872993 2785 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:36:38.893714 kubelet[2785]: I0517 00:36:38.893680 2785 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:36:38.893966 kubelet[2785]: I0517 00:36:38.893948 2785 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:36:38.894420 kubelet[2785]: I0517 00:36:38.894403 2785 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:36:38.897374 kubelet[2785]: I0517 00:36:38.897344 2785 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:36:38.901925 kubelet[2785]: I0517 00:36:38.901888 2785 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:36:38.966512 kubelet[2785]: E0517 00:36:38.965765 2785 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:36:38.966512 kubelet[2785]: I0517 00:36:38.965802 2785 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:36:38.973791 kubelet[2785]: I0517 00:36:38.973755 2785 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:36:38.974667 kubelet[2785]: I0517 00:36:38.974628 2785 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:36:38.974859 kubelet[2785]: I0517 00:36:38.974794 2785 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:36:38.975078 kubelet[2785]: I0517 00:36:38.974853 2785 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManage
rPolicyOptions":null,"CgroupVersion":1} May 17 00:36:38.975216 kubelet[2785]: I0517 00:36:38.975091 2785 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:36:38.975216 kubelet[2785]: I0517 00:36:38.975106 2785 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:36:38.975216 kubelet[2785]: I0517 00:36:38.975148 2785 state_mem.go:36] "Initialized new in-memory state store" May 17 00:36:38.975350 kubelet[2785]: I0517 00:36:38.975286 2785 kubelet.go:408] "Attempting to sync node with API server" May 17 00:36:38.975350 kubelet[2785]: I0517 00:36:38.975310 2785 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:36:38.975436 kubelet[2785]: I0517 00:36:38.975359 2785 kubelet.go:314] "Adding apiserver pod source" May 17 00:36:38.975436 kubelet[2785]: I0517 00:36:38.975377 2785 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:36:39.009941 kubelet[2785]: I0517 00:36:39.009809 2785 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:36:39.017635 kubelet[2785]: I0517 00:36:39.011692 2785 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:36:39.017635 kubelet[2785]: I0517 00:36:39.012307 2785 server.go:1274] "Started kubelet" May 17 00:36:39.018109 kubelet[2785]: I0517 00:36:39.018008 2785 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:36:39.032013 kernel: audit: type=1400 audit(1747442199.023:229): avc: denied { mac_admin } for pid=2785 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:36:39.023000 audit[2785]: AVC avc: denied { mac_admin } for pid=2785 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:36:39.032323 kubelet[2785]: I0517 00:36:39.027896 2785 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:36:39.032323 kubelet[2785]: I0517 00:36:39.024438 2785 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 00:36:39.032323 kubelet[2785]: I0517 00:36:39.028065 2785 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 00:36:39.032323 kubelet[2785]: I0517 00:36:39.028102 2785 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:36:39.032323 kubelet[2785]: I0517 00:36:39.018943 2785 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:36:39.023000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:36:39.047635 kernel: audit: type=1401 audit(1747442199.023:229): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:36:39.047812 kernel: audit: type=1300 audit(1747442199.023:229): arch=c000003e syscall=188 success=no exit=-22 a0=c000bc49f0 a1=c0009c0ed0 a2=c000bc49c0 a3=25 items=0 ppid=1 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:39.023000 audit[2785]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bc49f0 a1=c0009c0ed0 a2=c000bc49c0 a3=25 items=0 ppid=1 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 
key=(null) May 17 00:36:39.048049 kubelet[2785]: I0517 00:36:39.041126 2785 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:36:39.023000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:36:39.057100 kubelet[2785]: I0517 00:36:39.050291 2785 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:36:39.057100 kubelet[2785]: I0517 00:36:39.051243 2785 server.go:449] "Adding debug handlers to kubelet server" May 17 00:36:39.057100 kubelet[2785]: I0517 00:36:39.056446 2785 factory.go:221] Registration of the systemd container factory successfully May 17 00:36:39.057100 kubelet[2785]: I0517 00:36:39.056573 2785 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:36:39.057957 kernel: audit: type=1327 audit(1747442199.023:229): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:36:39.023000 audit[2785]: AVC avc: denied { mac_admin } for pid=2785 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:36:39.065367 kubelet[2785]: I0517 00:36:39.063911 2785 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:36:39.065367 kubelet[2785]: I0517 00:36:39.064082 2785 reconciler.go:26] "Reconciler: start to sync state" 
May 17 00:36:39.066506 kernel: audit: type=1400 audit(1747442199.023:230): avc: denied { mac_admin } for pid=2785 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:36:39.023000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:36:39.070978 kubelet[2785]: I0517 00:36:39.069551 2785 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:36:39.070978 kubelet[2785]: I0517 00:36:39.070904 2785 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:36:39.070978 kubelet[2785]: I0517 00:36:39.070938 2785 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:36:39.070978 kubelet[2785]: I0517 00:36:39.070964 2785 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:36:39.071199 kubelet[2785]: E0517 00:36:39.071027 2785 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:36:39.071951 kernel: audit: type=1401 audit(1747442199.023:230): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:36:39.023000 audit[2785]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009f8be0 a1=c0009c0f00 a2=c000bc4c90 a3=25 items=0 ppid=1 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:39.091048 kernel: audit: type=1300 audit(1747442199.023:230): arch=c000003e syscall=188 success=no exit=-22 a0=c0009f8be0 a1=c0009c0f00 a2=c000bc4c90 a3=25 items=0 ppid=1 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 
00:36:39.091238 kernel: audit: type=1327 audit(1747442199.023:230): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:36:39.023000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:36:39.091355 kubelet[2785]: E0517 00:36:39.089815 2785 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:36:39.091780 kubelet[2785]: I0517 00:36:39.091524 2785 factory.go:221] Registration of the containerd container factory successfully May 17 00:36:39.171180 kubelet[2785]: E0517 00:36:39.171101 2785 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:36:39.182171 kubelet[2785]: I0517 00:36:39.182138 2785 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:36:39.182171 kubelet[2785]: I0517 00:36:39.182160 2785 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:36:39.182379 kubelet[2785]: I0517 00:36:39.182185 2785 state_mem.go:36] "Initialized new in-memory state store" May 17 00:36:39.182488 kubelet[2785]: I0517 00:36:39.182465 2785 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:36:39.182554 kubelet[2785]: I0517 00:36:39.182486 2785 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:36:39.182554 kubelet[2785]: I0517 00:36:39.182532 2785 policy_none.go:49] "None policy: Start" May 17 00:36:39.183487 kubelet[2785]: I0517 00:36:39.183459 2785 memory_manager.go:170] "Starting 
memorymanager" policy="None" May 17 00:36:39.183487 kubelet[2785]: I0517 00:36:39.183490 2785 state_mem.go:35] "Initializing new in-memory state store" May 17 00:36:39.183749 kubelet[2785]: I0517 00:36:39.183731 2785 state_mem.go:75] "Updated machine memory state" May 17 00:36:39.185531 kubelet[2785]: I0517 00:36:39.185500 2785 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:36:39.187000 audit[2785]: AVC avc: denied { mac_admin } for pid=2785 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:36:39.187000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:36:39.187000 audit[2785]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c6f110 a1=c000c379e0 a2=c000c6f0e0 a3=25 items=0 ppid=1 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:39.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:36:39.188368 kubelet[2785]: I0517 00:36:39.188344 2785 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:36:39.188658 kubelet[2785]: I0517 00:36:39.188645 2785 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:36:39.188786 kubelet[2785]: I0517 00:36:39.188747 2785 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:36:39.189294 kubelet[2785]: I0517 00:36:39.189277 2785 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:36:39.302074 kubelet[2785]: I0517 00:36:39.302035 2785 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-143" May 17 00:36:39.315138 kubelet[2785]: I0517 00:36:39.315097 2785 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-26-143" May 17 00:36:39.315358 kubelet[2785]: I0517 00:36:39.315217 2785 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-26-143" May 17 00:36:39.469364 kubelet[2785]: I0517 00:36:39.469284 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c40e8e640dff3343760b1892e6f4318-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-143\" (UID: \"8c40e8e640dff3343760b1892e6f4318\") " pod="kube-system/kube-controller-manager-ip-172-31-26-143" May 17 00:36:39.469364 kubelet[2785]: I0517 00:36:39.469331 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c40e8e640dff3343760b1892e6f4318-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-143\" (UID: \"8c40e8e640dff3343760b1892e6f4318\") " pod="kube-system/kube-controller-manager-ip-172-31-26-143" May 17 00:36:39.469364 kubelet[2785]: I0517 00:36:39.469356 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c40e8e640dff3343760b1892e6f4318-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-143\" (UID: \"8c40e8e640dff3343760b1892e6f4318\") " pod="kube-system/kube-controller-manager-ip-172-31-26-143" May 17 00:36:39.469364 kubelet[2785]: I0517 00:36:39.469374 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f29401d298596df7da697f329fe35b4-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-143\" (UID: \"9f29401d298596df7da697f329fe35b4\") " pod="kube-system/kube-scheduler-ip-172-31-26-143" May 17 00:36:39.469624 kubelet[2785]: I0517 00:36:39.469390 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e281a640983a97c3f7d174bd27d531c1-ca-certs\") pod \"kube-apiserver-ip-172-31-26-143\" (UID: \"e281a640983a97c3f7d174bd27d531c1\") " pod="kube-system/kube-apiserver-ip-172-31-26-143" May 17 00:36:39.469624 kubelet[2785]: I0517 00:36:39.469418 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e281a640983a97c3f7d174bd27d531c1-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-143\" (UID: \"e281a640983a97c3f7d174bd27d531c1\") " pod="kube-system/kube-apiserver-ip-172-31-26-143" May 17 00:36:39.469624 kubelet[2785]: I0517 00:36:39.469433 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e281a640983a97c3f7d174bd27d531c1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-143\" (UID: \"e281a640983a97c3f7d174bd27d531c1\") " pod="kube-system/kube-apiserver-ip-172-31-26-143" May 17 00:36:39.469624 kubelet[2785]: I0517 00:36:39.469449 2785 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c40e8e640dff3343760b1892e6f4318-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-143\" (UID: \"8c40e8e640dff3343760b1892e6f4318\") " pod="kube-system/kube-controller-manager-ip-172-31-26-143" May 17 00:36:39.469624 kubelet[2785]: I0517 00:36:39.469463 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c40e8e640dff3343760b1892e6f4318-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-143\" (UID: \"8c40e8e640dff3343760b1892e6f4318\") " pod="kube-system/kube-controller-manager-ip-172-31-26-143" May 17 00:36:39.976766 kubelet[2785]: I0517 00:36:39.976715 2785 apiserver.go:52] "Watching apiserver" May 17 00:36:40.073483 kubelet[2785]: I0517 00:36:40.073450 2785 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:36:40.105467 kubelet[2785]: I0517 00:36:40.105391 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-143" podStartSLOduration=1.105355191 podStartE2EDuration="1.105355191s" podCreationTimestamp="2025-05-17 00:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:36:40.102079913 +0000 UTC m=+1.325468658" watchObservedRunningTime="2025-05-17 00:36:40.105355191 +0000 UTC m=+1.328743935" May 17 00:36:40.116923 kubelet[2785]: I0517 00:36:40.116853 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-143" podStartSLOduration=1.116814483 podStartE2EDuration="1.116814483s" podCreationTimestamp="2025-05-17 00:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-17 00:36:40.11643411 +0000 UTC m=+1.339822858" watchObservedRunningTime="2025-05-17 00:36:40.116814483 +0000 UTC m=+1.340203232" May 17 00:36:40.160708 kubelet[2785]: I0517 00:36:40.160633 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-143" podStartSLOduration=1.160606843 podStartE2EDuration="1.160606843s" podCreationTimestamp="2025-05-17 00:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:36:40.13713327 +0000 UTC m=+1.360522016" watchObservedRunningTime="2025-05-17 00:36:40.160606843 +0000 UTC m=+1.383995596" May 17 00:36:41.829647 kubelet[2785]: I0517 00:36:41.829582 2785 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:36:41.830090 env[1842]: time="2025-05-17T00:36:41.829947250Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 17 00:36:41.830547 kubelet[2785]: I0517 00:36:41.830514 2785 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:36:42.791955 kubelet[2785]: I0517 00:36:42.791904 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0e0bf43-2cde-467b-a584-3aadc17367cb-xtables-lock\") pod \"kube-proxy-lc8nz\" (UID: \"d0e0bf43-2cde-467b-a584-3aadc17367cb\") " pod="kube-system/kube-proxy-lc8nz" May 17 00:36:42.792180 kubelet[2785]: I0517 00:36:42.791967 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0e0bf43-2cde-467b-a584-3aadc17367cb-lib-modules\") pod \"kube-proxy-lc8nz\" (UID: \"d0e0bf43-2cde-467b-a584-3aadc17367cb\") " pod="kube-system/kube-proxy-lc8nz" May 17 00:36:42.792180 kubelet[2785]: I0517 00:36:42.791995 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqxvk\" (UniqueName: \"kubernetes.io/projected/d0e0bf43-2cde-467b-a584-3aadc17367cb-kube-api-access-jqxvk\") pod \"kube-proxy-lc8nz\" (UID: \"d0e0bf43-2cde-467b-a584-3aadc17367cb\") " pod="kube-system/kube-proxy-lc8nz" May 17 00:36:42.792180 kubelet[2785]: I0517 00:36:42.792023 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d0e0bf43-2cde-467b-a584-3aadc17367cb-kube-proxy\") pod \"kube-proxy-lc8nz\" (UID: \"d0e0bf43-2cde-467b-a584-3aadc17367cb\") " pod="kube-system/kube-proxy-lc8nz" May 17 00:36:42.900762 kubelet[2785]: I0517 00:36:42.900718 2785 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:36:42.920824 env[1842]: time="2025-05-17T00:36:42.920727496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lc8nz,Uid:d0e0bf43-2cde-467b-a584-3aadc17367cb,Namespace:kube-system,Attempt:0,}" May 17 00:36:42.954695 env[1842]: time="2025-05-17T00:36:42.954598109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:36:42.954695 env[1842]: time="2025-05-17T00:36:42.954658559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:36:42.954695 env[1842]: time="2025-05-17T00:36:42.954674832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:36:42.955316 env[1842]: time="2025-05-17T00:36:42.955227347Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d250e114f552fd0a05922bd1b99f8cc68a28c7f02c213afb60ecdcf5fbb9dffd pid=2836 runtime=io.containerd.runc.v2 May 17 00:36:42.992678 kubelet[2785]: I0517 00:36:42.992635 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52ml7\" (UniqueName: \"kubernetes.io/projected/73e8066c-f770-4b96-8e83-59c998e1f842-kube-api-access-52ml7\") pod \"tigera-operator-7c5755cdcb-8xdsh\" (UID: \"73e8066c-f770-4b96-8e83-59c998e1f842\") " pod="tigera-operator/tigera-operator-7c5755cdcb-8xdsh" May 17 00:36:42.992886 kubelet[2785]: I0517 00:36:42.992690 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/73e8066c-f770-4b96-8e83-59c998e1f842-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-8xdsh\" (UID: 
\"73e8066c-f770-4b96-8e83-59c998e1f842\") " pod="tigera-operator/tigera-operator-7c5755cdcb-8xdsh" May 17 00:36:43.022889 env[1842]: time="2025-05-17T00:36:43.022824418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lc8nz,Uid:d0e0bf43-2cde-467b-a584-3aadc17367cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d250e114f552fd0a05922bd1b99f8cc68a28c7f02c213afb60ecdcf5fbb9dffd\"" May 17 00:36:43.027024 env[1842]: time="2025-05-17T00:36:43.026976842Z" level=info msg="CreateContainer within sandbox \"d250e114f552fd0a05922bd1b99f8cc68a28c7f02c213afb60ecdcf5fbb9dffd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:36:43.090100 env[1842]: time="2025-05-17T00:36:43.089614622Z" level=info msg="CreateContainer within sandbox \"d250e114f552fd0a05922bd1b99f8cc68a28c7f02c213afb60ecdcf5fbb9dffd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bab1bab608a73fc82471132f81a964f45cc812d26367c891f935a67e913bc4a2\"" May 17 00:36:43.092253 env[1842]: time="2025-05-17T00:36:43.092209647Z" level=info msg="StartContainer for \"bab1bab608a73fc82471132f81a964f45cc812d26367c891f935a67e913bc4a2\"" May 17 00:36:43.148357 env[1842]: time="2025-05-17T00:36:43.148312901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-8xdsh,Uid:73e8066c-f770-4b96-8e83-59c998e1f842,Namespace:tigera-operator,Attempt:0,}" May 17 00:36:43.178595 env[1842]: time="2025-05-17T00:36:43.178536456Z" level=info msg="StartContainer for \"bab1bab608a73fc82471132f81a964f45cc812d26367c891f935a67e913bc4a2\" returns successfully" May 17 00:36:43.186485 env[1842]: time="2025-05-17T00:36:43.186409843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:36:43.186703 env[1842]: time="2025-05-17T00:36:43.186655835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:36:43.186703 env[1842]: time="2025-05-17T00:36:43.186679838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:36:43.187170 env[1842]: time="2025-05-17T00:36:43.187122826Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8091f333cd97518bca61fff043ce8cd497380700c98f3f11b62da2401b874dc pid=2912 runtime=io.containerd.runc.v2 May 17 00:36:43.262363 env[1842]: time="2025-05-17T00:36:43.262314700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-8xdsh,Uid:73e8066c-f770-4b96-8e83-59c998e1f842,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b8091f333cd97518bca61fff043ce8cd497380700c98f3f11b62da2401b874dc\"" May 17 00:36:43.266690 env[1842]: time="2025-05-17T00:36:43.266648121Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:36:43.910033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328151355.mount: Deactivated successfully. May 17 00:36:44.578998 update_engine[1832]: I0517 00:36:44.578934 1832 update_attempter.cc:509] Updating boot flags... 
May 17 00:36:44.685000 audit[3003]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.711805 kernel: kauditd_printk_skb: 4 callbacks suppressed May 17 00:36:44.726910 kernel: audit: type=1325 audit(1747442204.685:232): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.726993 kernel: audit: type=1325 audit(1747442204.688:233): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:44.727028 kernel: audit: type=1300 audit(1747442204.688:233): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc305f7c30 a2=0 a3=7ffc305f7c1c items=0 ppid=2892 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.688000 audit[3004]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:44.688000 audit[3004]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc305f7c30 a2=0 a3=7ffc305f7c1c items=0 ppid=2892 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.743636 kernel: audit: type=1327 audit(1747442204.688:233): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:36:44.688000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:36:44.760624 kernel: audit: type=1325 audit(1747442204.688:234): table=nat:40 family=10 entries=1 
op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:44.760764 kernel: audit: type=1300 audit(1747442204.688:234): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe94b7b420 a2=0 a3=7ffe94b7b40c items=0 ppid=2892 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.688000 audit[3005]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:44.688000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe94b7b420 a2=0 a3=7ffe94b7b40c items=0 ppid=2892 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.688000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:36:44.775699 kernel: audit: type=1327 audit(1747442204.688:234): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:36:44.775853 kernel: audit: type=1300 audit(1747442204.685:232): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7866adf0 a2=0 a3=7ffc7866addc items=0 ppid=2892 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.685000 audit[3003]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7866adf0 a2=0 a3=7ffc7866addc items=0 ppid=2892 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:36:44.780960 kernel: audit: type=1327 audit(1747442204.685:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:36:44.692000 audit[3006]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:44.786877 kernel: audit: type=1325 audit(1747442204.692:235): table=filter:41 family=10 entries=1 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:44.692000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2cdce610 a2=0 a3=7ffc2cdce5fc items=0 ppid=2892 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.692000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:36:44.694000 audit[3007]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.694000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2bac7630 a2=0 a3=7fff2bac761c items=0 ppid=2892 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.694000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 
00:36:44.697000 audit[3008]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.697000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff447eb710 a2=0 a3=7fff447eb6fc items=0 ppid=2892 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.697000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:36:44.800000 audit[3031]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.800000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffb092f310 a2=0 a3=7fffb092f2fc items=0 ppid=2892 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:36:44.808000 audit[3038]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.808000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe0eb8f720 a2=0 a3=7ffe0eb8f70c items=0 ppid=2892 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.808000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 17 00:36:44.818000 audit[3041]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.818000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd257bf590 a2=0 a3=7ffd257bf57c items=0 ppid=2892 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 17 00:36:44.822000 audit[3042]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.822000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf414cca0 a2=0 a3=7ffdf414cc8c items=0 ppid=2892 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:36:44.826000 audit[3047]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.826000 audit[3047]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7fff079a5950 a2=0 a3=7fff079a593c items=0 ppid=2892 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:36:44.829000 audit[3051]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.829000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb495f4c0 a2=0 a3=7fffb495f4ac items=0 ppid=2892 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 00:36:44.838000 audit[3054]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.838000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc91dd2df0 a2=0 a3=7ffc91dd2ddc items=0 ppid=2892 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.838000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:36:44.845000 audit[3059]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.845000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffed29e030 a2=0 a3=7fffed29e01c items=0 ppid=2892 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.845000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 17 00:36:44.847000 audit[3060]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.847000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcce1b3710 a2=0 a3=7ffcce1b36fc items=0 ppid=2892 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:36:44.850000 audit[3065]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.850000 audit[3065]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffc36772a50 a2=0 a3=7ffc36772a3c items=0 ppid=2892 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:36:44.853000 audit[3068]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.853000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa6dd7060 a2=0 a3=7fffa6dd704c items=0 ppid=2892 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:36:44.861000 audit[3073]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.861000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeddd46b40 a2=0 a3=7ffeddd46b2c items=0 ppid=2892 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.861000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:36:44.874000 audit[3084]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.874000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee49290d0 a2=0 a3=7ffee49290bc items=0 ppid=2892 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.874000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:36:44.880000 audit[3088]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.880000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc9be635b0 a2=0 a3=7ffc9be6359c items=0 ppid=2892 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 00:36:44.881000 audit[3090]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.881000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcc3f39ca0 a2=0 a3=7ffcc3f39c8c items=0 ppid=2892 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:36:44.888000 audit[3094]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.888000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffe6b83830 a2=0 a3=7fffe6b8381c items=0 ppid=2892 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:36:44.899000 audit[3102]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.899000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc0aa077f0 a2=0 a3=7ffc0aa077dc items=0 ppid=2892 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.899000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:36:44.901000 audit[3103]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.901000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7b356c80 a2=0 a3=7fff7b356c6c items=0 ppid=2892 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.901000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 00:36:44.916000 audit[3106]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:36:44.916000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdfee44290 a2=0 a3=7ffdfee4427c items=0 ppid=2892 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:44.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:36:44.985335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2668746445.mount: Deactivated successfully. 
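The `proctitle=` fields in the audit records above are the invoked command line, hex-encoded with NUL bytes separating the argv elements. A minimal Python sketch to decode them back into readable commands; the sample value is copied verbatim from the first PROCTITLE record in this log (it decodes to the kube-proxy canary-chain registration):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex payload (NUL-separated argv)
    into a space-joined command line."""
    return bytes.fromhex(hex_str).decode("ascii").replace("\x00", " ")

# First PROCTITLE value from this log (table=filter:43, pid=3008):
sample = ("69707461626C6573002D770035002D5700313030303030002D4E004B5542"
          "452D50524F58592D43414E415259002D740066696C746572")
print(decode_proctitle(sample))
# → iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t filter
```

Decoding the remaining records the same way shows kube-proxy registering its standard chains (KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL, KUBE-POSTROUTING) in the filter and nat tables for both IPv4 (`iptables`) and IPv6 (`ip6tables`). Note that PROCTITLE payloads are truncated by the kernel at 128 bytes, which is why several decoded commands in this log end mid-word.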
May 17 00:36:45.022000 audit[3117]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:45.022000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdc7668700 a2=0 a3=7ffdc76686ec items=0 ppid=2892 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:45.046000 audit[3117]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:45.046000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdc7668700 a2=0 a3=7ffdc76686ec items=0 ppid=2892 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.046000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:45.053000 audit[3131]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.053000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffebc8f38e0 a2=0 a3=7ffebc8f38cc items=0 ppid=2892 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.053000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:36:45.059000 audit[3133]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3133 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.059000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc7991c400 a2=0 a3=7ffc7991c3ec items=0 ppid=2892 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.059000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 17 00:36:45.065000 audit[3136]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.065000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdd8dd1880 a2=0 a3=7ffdd8dd186c items=0 ppid=2892 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 17 00:36:45.068000 audit[3137]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.068000 audit[3137]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd68ceb6b0 a2=0 a3=7ffd68ceb69c items=0 ppid=2892 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:36:45.081000 audit[3139]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.081000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc3bd132f0 a2=0 a3=7ffc3bd132dc items=0 ppid=2892 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:36:45.084000 audit[3140]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.084000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe86396030 a2=0 a3=7ffe8639601c items=0 ppid=2892 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.084000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 00:36:45.088000 audit[3142]: 
NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.088000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc54b6e960 a2=0 a3=7ffc54b6e94c items=0 ppid=2892 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 17 00:36:45.100000 audit[3145]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.100000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffccbdf1ce0 a2=0 a3=7ffccbdf1ccc items=0 ppid=2892 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.100000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:36:45.102000 audit[3146]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.102000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff85ca0ff0 a2=0 a3=7fff85ca0fdc items=0 ppid=2892 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:36:45.106000 audit[3148]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.106000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc1299b8e0 a2=0 a3=7ffc1299b8cc items=0 ppid=2892 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.106000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:36:45.112000 audit[3149]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.112000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff675eccb0 a2=0 a3=7fff675ecc9c items=0 ppid=2892 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.112000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:36:45.117000 audit[3151]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.117000 audit[3151]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffebefeb8b0 a2=0 a3=7ffebefeb89c items=0 ppid=2892 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.117000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:36:45.135000 audit[3155]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.135000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6dc919d0 a2=0 a3=7ffe6dc919bc items=0 ppid=2892 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.135000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 00:36:45.145000 audit[3158]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.145000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0a328790 a2=0 a3=7ffd0a32877c items=0 ppid=2892 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.145000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 17 00:36:45.147000 audit[3159]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.147000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdcbf5e450 a2=0 a3=7ffdcbf5e43c items=0 ppid=2892 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:36:45.152000 audit[3162]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.152000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffee9924fe0 a2=0 a3=7ffee9924fcc items=0 ppid=2892 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:36:45.162000 audit[3170]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.162000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 
a1=7ffd2f2df8f0 a2=0 a3=7ffd2f2df8dc items=0 ppid=2892 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.162000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:36:45.164000 audit[3171]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.164000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeeec53ca0 a2=0 a3=7ffeeec53c8c items=0 ppid=2892 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 00:36:45.176000 audit[3173]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.176000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffed8405160 a2=0 a3=7ffed840514c items=0 ppid=2892 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.176000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:36:45.188000 audit[3174]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.188000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6a20afa0 a2=0 a3=7ffe6a20af8c items=0 ppid=2892 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 00:36:45.197000 audit[3180]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.197000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd27cabae0 a2=0 a3=7ffd27cabacc items=0 ppid=2892 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:36:45.219000 audit[3202]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:36:45.219000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdaddddc60 a2=0 a3=7ffdaddddc4c items=0 ppid=2892 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.219000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:36:45.230000 audit[3210]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:36:45.230000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc99832680 a2=0 a3=7ffc9983266c items=0 ppid=2892 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.230000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:45.231000 audit[3210]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:36:45.231000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc99832680 a2=0 a3=7ffc9983266c items=0 ppid=2892 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:45.231000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:46.163914 env[1842]: time="2025-05-17T00:36:46.163852960Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:46.166227 env[1842]: 
time="2025-05-17T00:36:46.166159692Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:46.168180 env[1842]: time="2025-05-17T00:36:46.168146939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:46.172968 env[1842]: time="2025-05-17T00:36:46.172919968Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:36:46.173300 env[1842]: time="2025-05-17T00:36:46.173279486Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:46.176577 env[1842]: time="2025-05-17T00:36:46.176448043Z" level=info msg="CreateContainer within sandbox \"b8091f333cd97518bca61fff043ce8cd497380700c98f3f11b62da2401b874dc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:36:46.188145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251183835.mount: Deactivated successfully. May 17 00:36:46.196370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811731243.mount: Deactivated successfully. 
May 17 00:36:46.201205 env[1842]: time="2025-05-17T00:36:46.201163829Z" level=info msg="CreateContainer within sandbox \"b8091f333cd97518bca61fff043ce8cd497380700c98f3f11b62da2401b874dc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9c7bfa7456b156b10e70567ad8cb5702d9e281e8ed4886bf7463b9f21748f1bf\"" May 17 00:36:46.201948 env[1842]: time="2025-05-17T00:36:46.201922271Z" level=info msg="StartContainer for \"9c7bfa7456b156b10e70567ad8cb5702d9e281e8ed4886bf7463b9f21748f1bf\"" May 17 00:36:46.264990 env[1842]: time="2025-05-17T00:36:46.264943402Z" level=info msg="StartContainer for \"9c7bfa7456b156b10e70567ad8cb5702d9e281e8ed4886bf7463b9f21748f1bf\" returns successfully" May 17 00:36:47.177174 kubelet[2785]: I0517 00:36:47.172890 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-8xdsh" podStartSLOduration=2.261673238 podStartE2EDuration="5.172858294s" podCreationTimestamp="2025-05-17 00:36:42 +0000 UTC" firstStartedPulling="2025-05-17 00:36:43.264007078 +0000 UTC m=+4.487395801" lastFinishedPulling="2025-05-17 00:36:46.17519212 +0000 UTC m=+7.398580857" observedRunningTime="2025-05-17 00:36:47.172623125 +0000 UTC m=+8.396011873" watchObservedRunningTime="2025-05-17 00:36:47.172858294 +0000 UTC m=+8.396247032" May 17 00:36:47.177174 kubelet[2785]: I0517 00:36:47.173209 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lc8nz" podStartSLOduration=5.173191574 podStartE2EDuration="5.173191574s" podCreationTimestamp="2025-05-17 00:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:36:44.174491764 +0000 UTC m=+5.397880509" watchObservedRunningTime="2025-05-17 00:36:47.173191574 +0000 UTC m=+8.396580320" May 17 00:36:49.099873 amazon-ssm-agent[1891]: 2025-05-17 00:36:49 INFO [MessagingDeliveryService] [Association] 
Schedule manager refreshed with 0 associations, 0 new associations associated May 17 00:36:53.322466 sudo[2143]: pam_unix(sudo:session): session closed for user root May 17 00:36:53.332719 kernel: kauditd_printk_skb: 143 callbacks suppressed May 17 00:36:53.332886 kernel: audit: type=1106 audit(1747442213.321:283): pid=2143 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:36:53.321000 audit[2143]: USER_END pid=2143 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:36:53.321000 audit[2143]: CRED_DISP pid=2143 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:36:53.345852 kernel: audit: type=1104 audit(1747442213.321:284): pid=2143 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 17 00:36:53.354195 sshd[2139]: pam_unix(sshd:session): session closed for user core May 17 00:36:53.358000 audit[2139]: USER_END pid=2139 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:53.370857 kernel: audit: type=1106 audit(1747442213.358:285): pid=2139 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:53.374802 systemd[1]: sshd@6-172.31.26.143:22-139.178.68.195:52040.service: Deactivated successfully. May 17 00:36:53.375983 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:36:53.379124 systemd-logind[1830]: Session 7 logged out. Waiting for processes to exit. May 17 00:36:53.381209 systemd-logind[1830]: Removed session 7. May 17 00:36:53.358000 audit[2139]: CRED_DISP pid=2139 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:53.391917 kernel: audit: type=1104 audit(1747442213.358:286): pid=2139 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:36:53.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.26.143:22-139.178.68.195:52040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:53.401864 kernel: audit: type=1131 audit(1747442213.374:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.26.143:22-139.178.68.195:52040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:54.282000 audit[3343]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:54.282000 audit[3343]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffecbc86690 a2=0 a3=7ffecbc8667c items=0 ppid=2892 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:54.298297 kernel: audit: type=1325 audit(1747442214.282:288): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:54.298462 kernel: audit: type=1300 audit(1747442214.282:288): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffecbc86690 a2=0 a3=7ffecbc8667c items=0 ppid=2892 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:54.282000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:54.316863 kernel: audit: type=1327 audit(1747442214.282:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:54.301000 audit[3343]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:54.301000 audit[3343]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffecbc86690 a2=0 a3=0 items=0 ppid=2892 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:54.334466 kernel: audit: type=1325 audit(1747442214.301:289): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:54.334624 kernel: audit: type=1300 audit(1747442214.301:289): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffecbc86690 a2=0 a3=0 items=0 ppid=2892 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:54.301000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:54.319000 audit[3345]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:54.319000 audit[3345]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc8eacadb0 a2=0 a3=7ffc8eacad9c items=0 ppid=2892 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:54.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:54.335000 audit[3345]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:54.335000 audit[3345]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 
a1=7ffc8eacadb0 a2=0 a3=0 items=0 ppid=2892 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:54.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:57.827000 audit[3347]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:57.827000 audit[3347]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffedbc60570 a2=0 a3=7ffedbc6055c items=0 ppid=2892 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:57.827000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:57.837000 audit[3347]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:57.837000 audit[3347]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedbc60570 a2=0 a3=0 items=0 ppid=2892 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:57.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:57.854000 audit[3349]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:57.854000 audit[3349]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffda1846b90 a2=0 a3=7ffda1846b7c items=0 ppid=2892 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:57.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:57.860000 audit[3349]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:57.860000 audit[3349]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda1846b90 a2=0 a3=0 items=0 ppid=2892 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:57.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:58.207330 kubelet[2785]: I0517 00:36:58.207156 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbz5k\" (UniqueName: \"kubernetes.io/projected/872962c4-fa90-4da8-96d1-82aebb888004-kube-api-access-tbz5k\") pod \"calico-typha-68555965c-sthsz\" (UID: \"872962c4-fa90-4da8-96d1-82aebb888004\") " pod="calico-system/calico-typha-68555965c-sthsz" May 17 00:36:58.207330 kubelet[2785]: I0517 00:36:58.207212 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872962c4-fa90-4da8-96d1-82aebb888004-tigera-ca-bundle\") pod \"calico-typha-68555965c-sthsz\" (UID: \"872962c4-fa90-4da8-96d1-82aebb888004\") " pod="calico-system/calico-typha-68555965c-sthsz" May 17 
00:36:58.207330 kubelet[2785]: I0517 00:36:58.207232 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/872962c4-fa90-4da8-96d1-82aebb888004-typha-certs\") pod \"calico-typha-68555965c-sthsz\" (UID: \"872962c4-fa90-4da8-96d1-82aebb888004\") " pod="calico-system/calico-typha-68555965c-sthsz" May 17 00:36:58.360940 env[1842]: time="2025-05-17T00:36:58.360878515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68555965c-sthsz,Uid:872962c4-fa90-4da8-96d1-82aebb888004,Namespace:calico-system,Attempt:0,}" May 17 00:36:58.390117 env[1842]: time="2025-05-17T00:36:58.389300037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:36:58.390117 env[1842]: time="2025-05-17T00:36:58.389347416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:36:58.390117 env[1842]: time="2025-05-17T00:36:58.389357875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:36:58.390117 env[1842]: time="2025-05-17T00:36:58.389519053Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54ec4a753fe3953a741bbb2fc52a362fa75c97754ce0024ab3796506725bc67b pid=3360 runtime=io.containerd.runc.v2 May 17 00:36:58.511329 kubelet[2785]: I0517 00:36:58.511201 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/319e15cf-faeb-4bfd-a81a-6af2f3add954-lib-modules\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.511329 kubelet[2785]: I0517 00:36:58.511292 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/319e15cf-faeb-4bfd-a81a-6af2f3add954-cni-bin-dir\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.512098 kubelet[2785]: I0517 00:36:58.512065 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/319e15cf-faeb-4bfd-a81a-6af2f3add954-node-certs\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.512301 kubelet[2785]: I0517 00:36:58.512164 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/319e15cf-faeb-4bfd-a81a-6af2f3add954-cni-log-dir\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.512301 kubelet[2785]: I0517 00:36:58.512196 2785 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/319e15cf-faeb-4bfd-a81a-6af2f3add954-var-run-calico\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.512555 kubelet[2785]: I0517 00:36:58.512244 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmmk7\" (UniqueName: \"kubernetes.io/projected/319e15cf-faeb-4bfd-a81a-6af2f3add954-kube-api-access-qmmk7\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.512555 kubelet[2785]: I0517 00:36:58.512351 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/319e15cf-faeb-4bfd-a81a-6af2f3add954-flexvol-driver-host\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.512555 kubelet[2785]: I0517 00:36:58.512413 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/319e15cf-faeb-4bfd-a81a-6af2f3add954-tigera-ca-bundle\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.512555 kubelet[2785]: I0517 00:36:58.512495 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/319e15cf-faeb-4bfd-a81a-6af2f3add954-xtables-lock\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.512730 kubelet[2785]: I0517 00:36:58.512524 2785 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/319e15cf-faeb-4bfd-a81a-6af2f3add954-cni-net-dir\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.512730 kubelet[2785]: I0517 00:36:58.512592 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/319e15cf-faeb-4bfd-a81a-6af2f3add954-policysync\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.512730 kubelet[2785]: I0517 00:36:58.512623 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/319e15cf-faeb-4bfd-a81a-6af2f3add954-var-lib-calico\") pod \"calico-node-9l9qz\" (UID: \"319e15cf-faeb-4bfd-a81a-6af2f3add954\") " pod="calico-system/calico-node-9l9qz" May 17 00:36:58.525179 env[1842]: time="2025-05-17T00:36:58.525079244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68555965c-sthsz,Uid:872962c4-fa90-4da8-96d1-82aebb888004,Namespace:calico-system,Attempt:0,} returns sandbox id \"54ec4a753fe3953a741bbb2fc52a362fa75c97754ce0024ab3796506725bc67b\"" May 17 00:36:58.530266 env[1842]: time="2025-05-17T00:36:58.530202268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:36:58.617820 kubelet[2785]: E0517 00:36:58.617770 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.617820 kubelet[2785]: W0517 00:36:58.617819 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.618104 
kubelet[2785]: E0517 00:36:58.617878 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.623208 kubelet[2785]: E0517 00:36:58.621366 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.623208 kubelet[2785]: W0517 00:36:58.621386 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.623208 kubelet[2785]: E0517 00:36:58.621411 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.630450 kubelet[2785]: E0517 00:36:58.630419 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.630450 kubelet[2785]: W0517 00:36:58.630444 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.630662 kubelet[2785]: E0517 00:36:58.630490 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.785081 kubelet[2785]: E0517 00:36:58.784933 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z72kr" podUID="7d455070-45c6-475a-be66-925b4a2071bc" May 17 00:36:58.801744 env[1842]: time="2025-05-17T00:36:58.801685097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9l9qz,Uid:319e15cf-faeb-4bfd-a81a-6af2f3add954,Namespace:calico-system,Attempt:0,}" May 17 00:36:58.813882 kubelet[2785]: E0517 00:36:58.813845 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.813882 kubelet[2785]: W0517 00:36:58.813873 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.814822 kubelet[2785]: E0517 00:36:58.814594 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.815085 kubelet[2785]: E0517 00:36:58.815063 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.815164 kubelet[2785]: W0517 00:36:58.815093 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.815164 kubelet[2785]: E0517 00:36:58.815119 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.815429 kubelet[2785]: E0517 00:36:58.815330 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.815429 kubelet[2785]: W0517 00:36:58.815343 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.815429 kubelet[2785]: E0517 00:36:58.815352 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.815915 kubelet[2785]: E0517 00:36:58.815896 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.815915 kubelet[2785]: W0517 00:36:58.815911 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.816012 kubelet[2785]: E0517 00:36:58.815923 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.816166 kubelet[2785]: E0517 00:36:58.816152 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.816214 kubelet[2785]: W0517 00:36:58.816176 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.816214 kubelet[2785]: E0517 00:36:58.816186 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.816465 kubelet[2785]: E0517 00:36:58.816430 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.816465 kubelet[2785]: W0517 00:36:58.816446 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.816465 kubelet[2785]: E0517 00:36:58.816457 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.816768 kubelet[2785]: E0517 00:36:58.816754 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.816768 kubelet[2785]: W0517 00:36:58.816767 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.816869 kubelet[2785]: E0517 00:36:58.816776 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.817106 kubelet[2785]: E0517 00:36:58.817051 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.817323 kubelet[2785]: W0517 00:36:58.817089 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.817383 kubelet[2785]: E0517 00:36:58.817328 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.817677 kubelet[2785]: E0517 00:36:58.817661 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.817736 kubelet[2785]: W0517 00:36:58.817691 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.817736 kubelet[2785]: E0517 00:36:58.817708 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.817937 kubelet[2785]: E0517 00:36:58.817924 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.817984 kubelet[2785]: W0517 00:36:58.817938 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.817984 kubelet[2785]: E0517 00:36:58.817961 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.818442 kubelet[2785]: E0517 00:36:58.818421 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.818442 kubelet[2785]: W0517 00:36:58.818439 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.818530 kubelet[2785]: E0517 00:36:58.818450 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.818660 kubelet[2785]: E0517 00:36:58.818646 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.818660 kubelet[2785]: W0517 00:36:58.818657 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.818732 kubelet[2785]: E0517 00:36:58.818666 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.819611 kubelet[2785]: E0517 00:36:58.818942 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.819611 kubelet[2785]: W0517 00:36:58.818953 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.819611 kubelet[2785]: E0517 00:36:58.818961 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.819611 kubelet[2785]: E0517 00:36:58.819426 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.819611 kubelet[2785]: W0517 00:36:58.819436 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.819611 kubelet[2785]: E0517 00:36:58.819447 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.819918 kubelet[2785]: E0517 00:36:58.819716 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.819918 kubelet[2785]: W0517 00:36:58.819725 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.819918 kubelet[2785]: E0517 00:36:58.819734 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.819996 kubelet[2785]: E0517 00:36:58.819966 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.819996 kubelet[2785]: W0517 00:36:58.819973 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.819996 kubelet[2785]: E0517 00:36:58.819982 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.820300 kubelet[2785]: E0517 00:36:58.820264 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.820300 kubelet[2785]: W0517 00:36:58.820281 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.820300 kubelet[2785]: E0517 00:36:58.820290 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.820904 kubelet[2785]: E0517 00:36:58.820871 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.820904 kubelet[2785]: W0517 00:36:58.820897 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.821021 kubelet[2785]: E0517 00:36:58.820910 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.824105 kubelet[2785]: E0517 00:36:58.824076 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.824304 kubelet[2785]: W0517 00:36:58.824277 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.824439 kubelet[2785]: E0517 00:36:58.824398 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.826072 kubelet[2785]: E0517 00:36:58.826055 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.826191 kubelet[2785]: W0517 00:36:58.826179 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.826277 kubelet[2785]: E0517 00:36:58.826265 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.826716 kubelet[2785]: E0517 00:36:58.826705 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.828883 kubelet[2785]: W0517 00:36:58.828823 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.829045 kubelet[2785]: E0517 00:36:58.829029 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.836448 kubelet[2785]: I0517 00:36:58.836418 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7d455070-45c6-475a-be66-925b4a2071bc-varrun\") pod \"csi-node-driver-z72kr\" (UID: \"7d455070-45c6-475a-be66-925b4a2071bc\") " pod="calico-system/csi-node-driver-z72kr" May 17 00:36:58.837166 kubelet[2785]: E0517 00:36:58.837107 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.841241 kubelet[2785]: W0517 00:36:58.841198 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.841471 kubelet[2785]: E0517 00:36:58.841454 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.841955 kubelet[2785]: E0517 00:36:58.841939 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.842093 kubelet[2785]: W0517 00:36:58.842077 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.842194 kubelet[2785]: E0517 00:36:58.842180 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.843122 kubelet[2785]: E0517 00:36:58.843106 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.843263 kubelet[2785]: W0517 00:36:58.843248 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.843375 kubelet[2785]: E0517 00:36:58.843361 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.843491 kubelet[2785]: I0517 00:36:58.843478 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d455070-45c6-475a-be66-925b4a2071bc-kubelet-dir\") pod \"csi-node-driver-z72kr\" (UID: \"7d455070-45c6-475a-be66-925b4a2071bc\") " pod="calico-system/csi-node-driver-z72kr" May 17 00:36:58.843871 kubelet[2785]: E0517 00:36:58.843856 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.843987 kubelet[2785]: W0517 00:36:58.843972 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.844093 kubelet[2785]: E0517 00:36:58.844080 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.844228 kubelet[2785]: I0517 00:36:58.844212 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7d455070-45c6-475a-be66-925b4a2071bc-socket-dir\") pod \"csi-node-driver-z72kr\" (UID: \"7d455070-45c6-475a-be66-925b4a2071bc\") " pod="calico-system/csi-node-driver-z72kr" May 17 00:36:58.844591 kubelet[2785]: E0517 00:36:58.844574 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.844697 kubelet[2785]: W0517 00:36:58.844683 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.844787 kubelet[2785]: E0517 00:36:58.844776 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.844912 kubelet[2785]: I0517 00:36:58.844897 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2cm\" (UniqueName: \"kubernetes.io/projected/7d455070-45c6-475a-be66-925b4a2071bc-kube-api-access-8s2cm\") pod \"csi-node-driver-z72kr\" (UID: \"7d455070-45c6-475a-be66-925b4a2071bc\") " pod="calico-system/csi-node-driver-z72kr" May 17 00:36:58.845270 kubelet[2785]: E0517 00:36:58.845245 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.846491 kubelet[2785]: W0517 00:36:58.846460 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.846628 kubelet[2785]: E0517 00:36:58.846611 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.846751 kubelet[2785]: I0517 00:36:58.846736 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7d455070-45c6-475a-be66-925b4a2071bc-registration-dir\") pod \"csi-node-driver-z72kr\" (UID: \"7d455070-45c6-475a-be66-925b4a2071bc\") " pod="calico-system/csi-node-driver-z72kr" May 17 00:36:58.847145 kubelet[2785]: E0517 00:36:58.847131 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.847245 kubelet[2785]: W0517 00:36:58.847232 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.847347 kubelet[2785]: E0517 00:36:58.847336 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.847668 kubelet[2785]: E0517 00:36:58.847647 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.847773 kubelet[2785]: W0517 00:36:58.847760 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.847890 kubelet[2785]: E0517 00:36:58.847878 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.848227 kubelet[2785]: E0517 00:36:58.848207 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.850504 kubelet[2785]: W0517 00:36:58.850475 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.850688 kubelet[2785]: E0517 00:36:58.850671 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.854571 kubelet[2785]: E0517 00:36:58.854539 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.854768 kubelet[2785]: W0517 00:36:58.854748 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.854948 kubelet[2785]: E0517 00:36:58.854931 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.866446 kubelet[2785]: E0517 00:36:58.866123 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.866446 kubelet[2785]: W0517 00:36:58.866153 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.866446 kubelet[2785]: E0517 00:36:58.866187 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.867609 kubelet[2785]: E0517 00:36:58.867037 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.867609 kubelet[2785]: W0517 00:36:58.867057 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.867609 kubelet[2785]: E0517 00:36:58.867183 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.867609 kubelet[2785]: E0517 00:36:58.867345 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.867609 kubelet[2785]: W0517 00:36:58.867355 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.867609 kubelet[2785]: E0517 00:36:58.867367 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.867609 kubelet[2785]: E0517 00:36:58.867560 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.867609 kubelet[2785]: W0517 00:36:58.867569 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.867609 kubelet[2785]: E0517 00:36:58.867580 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.882877 env[1842]: time="2025-05-17T00:36:58.882774010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:36:58.883153 env[1842]: time="2025-05-17T00:36:58.883109071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:36:58.883307 env[1842]: time="2025-05-17T00:36:58.883280458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:36:58.883722 env[1842]: time="2025-05-17T00:36:58.883680158Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b0ee588d1361ad16ae5f87ff2e52158f4ac290949ee718631321212a2e4cf1f pid=3453 runtime=io.containerd.runc.v2 May 17 00:36:58.899000 audit[3463]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=3463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:58.902867 kernel: kauditd_printk_skb: 19 callbacks suppressed May 17 00:36:58.903008 kernel: audit: type=1325 audit(1747442218.899:296): table=filter:97 family=2 entries=20 op=nft_register_rule pid=3463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:58.899000 audit[3463]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff5cf9fb40 a2=0 a3=7fff5cf9fb2c items=0 ppid=2892 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:58.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:58.920757 kernel: audit: type=1300 audit(1747442218.899:296): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff5cf9fb40 a2=0 a3=7fff5cf9fb2c items=0 ppid=2892 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:58.920934 kernel: audit: type=1327 audit(1747442218.899:296): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:58.922000 audit[3463]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule 
pid=3463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:58.930863 kernel: audit: type=1325 audit(1747442218.922:297): table=nat:98 family=2 entries=12 op=nft_register_rule pid=3463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:58.931005 kernel: audit: type=1300 audit(1747442218.922:297): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5cf9fb40 a2=0 a3=0 items=0 ppid=2892 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:58.922000 audit[3463]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5cf9fb40 a2=0 a3=0 items=0 ppid=2892 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:58.952855 kernel: audit: type=1327 audit(1747442218.922:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:58.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:58.953043 kubelet[2785]: E0517 00:36:58.947971 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.953043 kubelet[2785]: W0517 00:36:58.947995 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.953043 kubelet[2785]: E0517 00:36:58.948023 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.953043 kubelet[2785]: E0517 00:36:58.948352 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.953043 kubelet[2785]: W0517 00:36:58.948365 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.953043 kubelet[2785]: E0517 00:36:58.948385 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.953043 kubelet[2785]: E0517 00:36:58.948664 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.953043 kubelet[2785]: W0517 00:36:58.948676 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.953043 kubelet[2785]: E0517 00:36:58.948694 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.953043 kubelet[2785]: E0517 00:36:58.948979 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.953551 kubelet[2785]: W0517 00:36:58.948989 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.953551 kubelet[2785]: E0517 00:36:58.949013 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.953551 kubelet[2785]: E0517 00:36:58.949304 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.953551 kubelet[2785]: W0517 00:36:58.949315 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.953551 kubelet[2785]: E0517 00:36:58.949336 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.953551 kubelet[2785]: E0517 00:36:58.949608 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.953551 kubelet[2785]: W0517 00:36:58.949626 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.953551 kubelet[2785]: E0517 00:36:58.949714 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.953551 kubelet[2785]: E0517 00:36:58.949876 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.953551 kubelet[2785]: W0517 00:36:58.949885 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.954101 kubelet[2785]: E0517 00:36:58.949974 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.954101 kubelet[2785]: E0517 00:36:58.950130 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.954101 kubelet[2785]: W0517 00:36:58.950139 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.954101 kubelet[2785]: E0517 00:36:58.950235 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.954101 kubelet[2785]: E0517 00:36:58.950386 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.954101 kubelet[2785]: W0517 00:36:58.950394 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.954101 kubelet[2785]: E0517 00:36:58.950484 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.954101 kubelet[2785]: E0517 00:36:58.950628 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.954101 kubelet[2785]: W0517 00:36:58.950636 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.954101 kubelet[2785]: E0517 00:36:58.950720 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.954559 kubelet[2785]: E0517 00:36:58.950931 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.954559 kubelet[2785]: W0517 00:36:58.950940 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.954559 kubelet[2785]: E0517 00:36:58.950957 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.954559 kubelet[2785]: E0517 00:36:58.951166 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.954559 kubelet[2785]: W0517 00:36:58.951182 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.954559 kubelet[2785]: E0517 00:36:58.951260 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.954559 kubelet[2785]: E0517 00:36:58.951446 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.954559 kubelet[2785]: W0517 00:36:58.951454 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.954559 kubelet[2785]: E0517 00:36:58.951541 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.954559 kubelet[2785]: E0517 00:36:58.951678 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.962966 kubelet[2785]: W0517 00:36:58.951687 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.962966 kubelet[2785]: E0517 00:36:58.951782 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.962966 kubelet[2785]: E0517 00:36:58.952058 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.962966 kubelet[2785]: W0517 00:36:58.952067 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.962966 kubelet[2785]: E0517 00:36:58.952159 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.962966 kubelet[2785]: E0517 00:36:58.952291 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.962966 kubelet[2785]: W0517 00:36:58.952304 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.962966 kubelet[2785]: E0517 00:36:58.952384 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.962966 kubelet[2785]: E0517 00:36:58.952531 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.962966 kubelet[2785]: W0517 00:36:58.952540 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.963546 kubelet[2785]: E0517 00:36:58.952554 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.963546 kubelet[2785]: E0517 00:36:58.952811 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.963546 kubelet[2785]: W0517 00:36:58.952897 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.963546 kubelet[2785]: E0517 00:36:58.952914 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.963546 kubelet[2785]: E0517 00:36:58.953454 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.963546 kubelet[2785]: W0517 00:36:58.953468 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.963546 kubelet[2785]: E0517 00:36:58.953490 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.963546 kubelet[2785]: E0517 00:36:58.953804 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.963546 kubelet[2785]: W0517 00:36:58.953817 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.963546 kubelet[2785]: E0517 00:36:58.953953 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.963938 kubelet[2785]: E0517 00:36:58.954130 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.963938 kubelet[2785]: W0517 00:36:58.954141 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.963938 kubelet[2785]: E0517 00:36:58.954233 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.963938 kubelet[2785]: E0517 00:36:58.954388 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.963938 kubelet[2785]: W0517 00:36:58.954401 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.963938 kubelet[2785]: E0517 00:36:58.954505 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.963938 kubelet[2785]: E0517 00:36:58.954670 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.963938 kubelet[2785]: W0517 00:36:58.954680 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.963938 kubelet[2785]: E0517 00:36:58.954695 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.963938 kubelet[2785]: E0517 00:36:58.955075 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.965195 kubelet[2785]: W0517 00:36:58.955086 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.965195 kubelet[2785]: E0517 00:36:58.955102 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.965195 kubelet[2785]: E0517 00:36:58.955427 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.965195 kubelet[2785]: W0517 00:36:58.955457 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.965195 kubelet[2785]: E0517 00:36:58.955471 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:36:58.979343 kubelet[2785]: E0517 00:36:58.978436 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:36:58.979343 kubelet[2785]: W0517 00:36:58.978464 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:36:58.979343 kubelet[2785]: E0517 00:36:58.978492 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:36:58.995247 env[1842]: time="2025-05-17T00:36:58.994665174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9l9qz,Uid:319e15cf-faeb-4bfd-a81a-6af2f3add954,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b0ee588d1361ad16ae5f87ff2e52158f4ac290949ee718631321212a2e4cf1f\"" May 17 00:36:59.319661 systemd[1]: run-containerd-runc-k8s.io-54ec4a753fe3953a741bbb2fc52a362fa75c97754ce0024ab3796506725bc67b-runc.B6TiRd.mount: Deactivated successfully. May 17 00:36:59.901615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount148284634.mount: Deactivated successfully. 
May 17 00:37:00.071744 kubelet[2785]: E0517 00:37:00.071625 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z72kr" podUID="7d455070-45c6-475a-be66-925b4a2071bc" May 17 00:37:00.937310 env[1842]: time="2025-05-17T00:37:00.936909839Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:00.941933 env[1842]: time="2025-05-17T00:37:00.941715407Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:00.945805 env[1842]: time="2025-05-17T00:37:00.945767218Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:00.948610 env[1842]: time="2025-05-17T00:37:00.948561782Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:00.949132 env[1842]: time="2025-05-17T00:37:00.949086180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:37:00.963043 env[1842]: time="2025-05-17T00:37:00.962982138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:37:00.972272 env[1842]: time="2025-05-17T00:37:00.972226869Z" level=info msg="CreateContainer within sandbox 
\"54ec4a753fe3953a741bbb2fc52a362fa75c97754ce0024ab3796506725bc67b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:37:00.997170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204980080.mount: Deactivated successfully. May 17 00:37:01.006672 env[1842]: time="2025-05-17T00:37:01.006594750Z" level=info msg="CreateContainer within sandbox \"54ec4a753fe3953a741bbb2fc52a362fa75c97754ce0024ab3796506725bc67b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ec30f34689392cf446be7339f73565d73be7002d047afb0aa160ff2ba1c3068a\"" May 17 00:37:01.009028 env[1842]: time="2025-05-17T00:37:01.008978720Z" level=info msg="StartContainer for \"ec30f34689392cf446be7339f73565d73be7002d047afb0aa160ff2ba1c3068a\"" May 17 00:37:01.120041 env[1842]: time="2025-05-17T00:37:01.119979559Z" level=info msg="StartContainer for \"ec30f34689392cf446be7339f73565d73be7002d047afb0aa160ff2ba1c3068a\" returns successfully" May 17 00:37:01.265520 kubelet[2785]: E0517 00:37:01.265379 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.265520 kubelet[2785]: W0517 00:37:01.265423 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.265520 kubelet[2785]: E0517 00:37:01.265455 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.266346 kubelet[2785]: E0517 00:37:01.265797 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.266346 kubelet[2785]: W0517 00:37:01.265810 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.266346 kubelet[2785]: E0517 00:37:01.265828 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.266346 kubelet[2785]: E0517 00:37:01.266172 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.266346 kubelet[2785]: W0517 00:37:01.266187 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.266346 kubelet[2785]: E0517 00:37:01.266203 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.266642 kubelet[2785]: E0517 00:37:01.266460 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.266642 kubelet[2785]: W0517 00:37:01.266472 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.266642 kubelet[2785]: E0517 00:37:01.266489 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.266785 kubelet[2785]: E0517 00:37:01.266723 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.266785 kubelet[2785]: W0517 00:37:01.266733 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.266785 kubelet[2785]: E0517 00:37:01.266755 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.268627 kubelet[2785]: E0517 00:37:01.267059 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.268627 kubelet[2785]: W0517 00:37:01.267073 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.268627 kubelet[2785]: E0517 00:37:01.267088 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.268627 kubelet[2785]: E0517 00:37:01.267300 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.268627 kubelet[2785]: W0517 00:37:01.267318 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.268627 kubelet[2785]: E0517 00:37:01.267330 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.268627 kubelet[2785]: E0517 00:37:01.267538 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.268627 kubelet[2785]: W0517 00:37:01.267549 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.268627 kubelet[2785]: E0517 00:37:01.267569 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.268627 kubelet[2785]: E0517 00:37:01.267802 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.269246 kubelet[2785]: W0517 00:37:01.267812 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.269246 kubelet[2785]: E0517 00:37:01.267842 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.269246 kubelet[2785]: E0517 00:37:01.268055 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.269246 kubelet[2785]: W0517 00:37:01.268065 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.269246 kubelet[2785]: E0517 00:37:01.268083 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.269246 kubelet[2785]: E0517 00:37:01.268321 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.269246 kubelet[2785]: W0517 00:37:01.268331 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.269246 kubelet[2785]: E0517 00:37:01.268357 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.269246 kubelet[2785]: E0517 00:37:01.268577 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.269246 kubelet[2785]: W0517 00:37:01.268587 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.269668 kubelet[2785]: E0517 00:37:01.268609 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.269668 kubelet[2785]: E0517 00:37:01.268864 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.269668 kubelet[2785]: W0517 00:37:01.268876 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.269668 kubelet[2785]: E0517 00:37:01.268889 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.269668 kubelet[2785]: E0517 00:37:01.269111 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.269668 kubelet[2785]: W0517 00:37:01.269121 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.269668 kubelet[2785]: E0517 00:37:01.269141 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.269668 kubelet[2785]: E0517 00:37:01.269346 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.269668 kubelet[2785]: W0517 00:37:01.269356 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.269668 kubelet[2785]: E0517 00:37:01.269375 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.287785 kubelet[2785]: E0517 00:37:01.287741 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.287785 kubelet[2785]: W0517 00:37:01.287779 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.288090 kubelet[2785]: E0517 00:37:01.287809 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.288234 kubelet[2785]: E0517 00:37:01.288213 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.288332 kubelet[2785]: W0517 00:37:01.288235 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.288332 kubelet[2785]: E0517 00:37:01.288259 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.288948 kubelet[2785]: E0517 00:37:01.288926 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.288948 kubelet[2785]: W0517 00:37:01.288945 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.289197 kubelet[2785]: E0517 00:37:01.288966 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.289422 kubelet[2785]: E0517 00:37:01.289403 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.289520 kubelet[2785]: W0517 00:37:01.289429 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.289575 kubelet[2785]: E0517 00:37:01.289540 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.289720 kubelet[2785]: E0517 00:37:01.289696 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.289720 kubelet[2785]: W0517 00:37:01.289711 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.289842 kubelet[2785]: E0517 00:37:01.289806 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.289978 kubelet[2785]: E0517 00:37:01.289964 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.290122 kubelet[2785]: W0517 00:37:01.289978 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.290122 kubelet[2785]: E0517 00:37:01.290072 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.290249 kubelet[2785]: E0517 00:37:01.290227 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.290249 kubelet[2785]: W0517 00:37:01.290246 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.290372 kubelet[2785]: E0517 00:37:01.290263 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.290489 kubelet[2785]: E0517 00:37:01.290475 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.290709 kubelet[2785]: W0517 00:37:01.290490 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.290709 kubelet[2785]: E0517 00:37:01.290506 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.290981 kubelet[2785]: E0517 00:37:01.290965 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.291072 kubelet[2785]: W0517 00:37:01.290982 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.291122 kubelet[2785]: E0517 00:37:01.291075 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.291449 kubelet[2785]: E0517 00:37:01.291433 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.291449 kubelet[2785]: W0517 00:37:01.291449 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.291620 kubelet[2785]: E0517 00:37:01.291540 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.291709 kubelet[2785]: E0517 00:37:01.291695 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.291791 kubelet[2785]: W0517 00:37:01.291711 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.291872 kubelet[2785]: E0517 00:37:01.291802 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.291987 kubelet[2785]: E0517 00:37:01.291974 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.292072 kubelet[2785]: W0517 00:37:01.291988 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.292072 kubelet[2785]: E0517 00:37:01.292006 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.292238 kubelet[2785]: E0517 00:37:01.292223 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.292307 kubelet[2785]: W0517 00:37:01.292239 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.292307 kubelet[2785]: E0517 00:37:01.292257 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.292532 kubelet[2785]: E0517 00:37:01.292518 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.292613 kubelet[2785]: W0517 00:37:01.292533 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.292613 kubelet[2785]: E0517 00:37:01.292550 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.293089 kubelet[2785]: E0517 00:37:01.293074 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.293188 kubelet[2785]: W0517 00:37:01.293090 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.293263 kubelet[2785]: E0517 00:37:01.293185 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.293396 kubelet[2785]: E0517 00:37:01.293377 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.293396 kubelet[2785]: W0517 00:37:01.293392 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.293508 kubelet[2785]: E0517 00:37:01.293406 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:01.293665 kubelet[2785]: E0517 00:37:01.293649 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.293758 kubelet[2785]: W0517 00:37:01.293665 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.293758 kubelet[2785]: E0517 00:37:01.293682 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:01.295799 kubelet[2785]: E0517 00:37:01.295774 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:01.295799 kubelet[2785]: W0517 00:37:01.295797 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:01.296073 kubelet[2785]: E0517 00:37:01.295818 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.071742 kubelet[2785]: E0517 00:37:02.071682 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z72kr" podUID="7d455070-45c6-475a-be66-925b4a2071bc" May 17 00:37:02.291093 kubelet[2785]: E0517 00:37:02.290856 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.291093 kubelet[2785]: W0517 00:37:02.290887 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.291093 kubelet[2785]: E0517 00:37:02.290938 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.321570 kubelet[2785]: E0517 00:37:02.313249 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.321570 kubelet[2785]: W0517 00:37:02.313280 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.321570 kubelet[2785]: E0517 00:37:02.313327 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.321570 kubelet[2785]: E0517 00:37:02.315212 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.321570 kubelet[2785]: W0517 00:37:02.315238 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.321570 kubelet[2785]: E0517 00:37:02.315284 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.321570 kubelet[2785]: E0517 00:37:02.319573 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.321570 kubelet[2785]: W0517 00:37:02.319616 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.321570 kubelet[2785]: E0517 00:37:02.319653 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.335520 kubelet[2785]: E0517 00:37:02.322485 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.338440 kubelet[2785]: W0517 00:37:02.335762 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.340362 kubelet[2785]: E0517 00:37:02.340288 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.341320 kubelet[2785]: E0517 00:37:02.341295 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.341509 kubelet[2785]: W0517 00:37:02.341487 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.344891 kubelet[2785]: E0517 00:37:02.344859 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.345486 kubelet[2785]: E0517 00:37:02.345465 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.345632 kubelet[2785]: W0517 00:37:02.345613 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.345742 kubelet[2785]: E0517 00:37:02.345728 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.371930 kubelet[2785]: E0517 00:37:02.371890 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.372243 kubelet[2785]: W0517 00:37:02.372212 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.372380 kubelet[2785]: E0517 00:37:02.372363 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.395069 kubelet[2785]: E0517 00:37:02.395029 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.409444 kubelet[2785]: W0517 00:37:02.409395 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.409719 kubelet[2785]: E0517 00:37:02.409699 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.412119 kubelet[2785]: I0517 00:37:02.412052 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68555965c-sthsz" podStartSLOduration=1.9896635649999999 podStartE2EDuration="4.412025366s" podCreationTimestamp="2025-05-17 00:36:58 +0000 UTC" firstStartedPulling="2025-05-17 00:36:58.527824166 +0000 UTC m=+19.751212904" lastFinishedPulling="2025-05-17 00:37:00.950185961 +0000 UTC m=+22.173574705" observedRunningTime="2025-05-17 00:37:01.219077281 +0000 UTC m=+22.442466027" watchObservedRunningTime="2025-05-17 00:37:02.412025366 +0000 UTC m=+23.635414114" May 17 00:37:02.417175 kubelet[2785]: E0517 00:37:02.417145 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.418527 kubelet[2785]: W0517 00:37:02.417396 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.418527 kubelet[2785]: E0517 00:37:02.417432 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.418998 kubelet[2785]: E0517 00:37:02.418979 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.422188 kubelet[2785]: W0517 00:37:02.422131 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.422472 kubelet[2785]: E0517 00:37:02.422450 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.425694 kubelet[2785]: E0517 00:37:02.425664 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.426615 kubelet[2785]: W0517 00:37:02.426210 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.427245 kubelet[2785]: E0517 00:37:02.427221 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.428467 kubelet[2785]: E0517 00:37:02.428449 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.439747 kubelet[2785]: W0517 00:37:02.435023 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.441393 kubelet[2785]: E0517 00:37:02.441352 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.442262 kubelet[2785]: E0517 00:37:02.442240 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.442431 kubelet[2785]: W0517 00:37:02.442412 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.442554 kubelet[2785]: E0517 00:37:02.442538 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.442968 kubelet[2785]: E0517 00:37:02.442955 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.443092 kubelet[2785]: W0517 00:37:02.443077 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.443197 kubelet[2785]: E0517 00:37:02.443184 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.448014 kubelet[2785]: E0517 00:37:02.447979 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.448334 kubelet[2785]: W0517 00:37:02.448306 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.448457 kubelet[2785]: E0517 00:37:02.448443 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.449134 kubelet[2785]: E0517 00:37:02.449117 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.449280 kubelet[2785]: W0517 00:37:02.449265 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.449369 kubelet[2785]: E0517 00:37:02.449355 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.449759 kubelet[2785]: E0517 00:37:02.449745 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.449884 kubelet[2785]: W0517 00:37:02.449869 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.449970 kubelet[2785]: E0517 00:37:02.449959 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.451129 kubelet[2785]: E0517 00:37:02.451113 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.452457 kubelet[2785]: W0517 00:37:02.452432 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.452601 kubelet[2785]: E0517 00:37:02.452586 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.453085 kubelet[2785]: E0517 00:37:02.453070 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.453201 kubelet[2785]: W0517 00:37:02.453187 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.453294 kubelet[2785]: E0517 00:37:02.453282 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.453629 kubelet[2785]: E0517 00:37:02.453616 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.453732 kubelet[2785]: W0517 00:37:02.453718 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.453826 kubelet[2785]: E0517 00:37:02.453812 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.454178 kubelet[2785]: E0517 00:37:02.454167 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.455583 kubelet[2785]: W0517 00:37:02.455556 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.455710 kubelet[2785]: E0517 00:37:02.455697 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.458046 kubelet[2785]: E0517 00:37:02.458023 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.458352 kubelet[2785]: W0517 00:37:02.458327 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.458469 kubelet[2785]: E0517 00:37:02.458453 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.458907 kubelet[2785]: E0517 00:37:02.458891 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.459019 kubelet[2785]: W0517 00:37:02.459004 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.459136 kubelet[2785]: E0517 00:37:02.459121 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.459480 kubelet[2785]: E0517 00:37:02.459468 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.459682 kubelet[2785]: W0517 00:37:02.459666 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.460769 kubelet[2785]: E0517 00:37:02.460749 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.468989 kubelet[2785]: E0517 00:37:02.468955 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.469221 kubelet[2785]: W0517 00:37:02.469198 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.469347 kubelet[2785]: E0517 00:37:02.469331 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.469763 kubelet[2785]: E0517 00:37:02.469747 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.469884 kubelet[2785]: W0517 00:37:02.469870 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.469980 kubelet[2785]: E0517 00:37:02.469966 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.470562 kubelet[2785]: E0517 00:37:02.470548 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.470665 kubelet[2785]: W0517 00:37:02.470652 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.470748 kubelet[2785]: E0517 00:37:02.470735 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.472635 kubelet[2785]: E0517 00:37:02.472616 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.472801 kubelet[2785]: W0517 00:37:02.472785 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.472921 kubelet[2785]: E0517 00:37:02.472906 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.507021 kubelet[2785]: E0517 00:37:02.506991 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.507245 kubelet[2785]: W0517 00:37:02.507225 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.507345 kubelet[2785]: E0517 00:37:02.507330 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.509931 kubelet[2785]: E0517 00:37:02.509897 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.510233 kubelet[2785]: W0517 00:37:02.510211 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.510352 kubelet[2785]: E0517 00:37:02.510337 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.511000 kubelet[2785]: E0517 00:37:02.510981 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.511127 kubelet[2785]: W0517 00:37:02.511113 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.515632 kubelet[2785]: E0517 00:37:02.515585 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:02.516899 kubelet[2785]: E0517 00:37:02.516872 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:02.523332 kubelet[2785]: W0517 00:37:02.522656 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:02.524028 kubelet[2785]: E0517 00:37:02.523995 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:02.672899 kernel: audit: type=1325 audit(1747442222.635:298): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:02.673091 kernel: audit: type=1300 audit(1747442222.635:298): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff4e23f5c0 a2=0 a3=7fff4e23f5ac items=0 ppid=2892 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:02.635000 audit[3629]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:02.635000 audit[3629]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff4e23f5c0 a2=0 a3=7fff4e23f5ac items=0 ppid=2892 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:02.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
May 17 00:37:02.749868 kernel: audit: type=1327 audit(1747442222.635:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:02.720000 audit[3629]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:02.789478 kernel: audit: type=1325 audit(1747442222.720:299): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:02.720000 audit[3629]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff4e23f5c0 a2=0 a3=7fff4e23f5ac items=0 ppid=2892 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:02.720000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:03.305277 env[1842]: time="2025-05-17T00:37:03.305232171Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:03.308799 env[1842]: time="2025-05-17T00:37:03.308752788Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:03.311912 env[1842]: time="2025-05-17T00:37:03.311866048Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:03.315289 env[1842]: time="2025-05-17T00:37:03.315244218Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:03.315897 env[1842]: time="2025-05-17T00:37:03.315856614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:37:03.321464 env[1842]: time="2025-05-17T00:37:03.321408033Z" level=info msg="CreateContainer within sandbox \"5b0ee588d1361ad16ae5f87ff2e52158f4ac290949ee718631321212a2e4cf1f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:37:03.322485 kubelet[2785]: E0517 00:37:03.322140 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.322485 kubelet[2785]: W0517 00:37:03.322176 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.322485 kubelet[2785]: E0517 00:37:03.322202 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.324731 kubelet[2785]: E0517 00:37:03.322788 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.324731 kubelet[2785]: W0517 00:37:03.322803 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.324731 kubelet[2785]: E0517 00:37:03.322824 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.324731 kubelet[2785]: E0517 00:37:03.323090 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.324731 kubelet[2785]: W0517 00:37:03.323101 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.324731 kubelet[2785]: E0517 00:37:03.323118 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.324731 kubelet[2785]: E0517 00:37:03.323348 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.324731 kubelet[2785]: W0517 00:37:03.323388 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.324731 kubelet[2785]: E0517 00:37:03.323405 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.324731 kubelet[2785]: E0517 00:37:03.323703 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.325248 kubelet[2785]: W0517 00:37:03.323714 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.325248 kubelet[2785]: E0517 00:37:03.323727 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.325248 kubelet[2785]: E0517 00:37:03.323958 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.325248 kubelet[2785]: W0517 00:37:03.323968 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.325248 kubelet[2785]: E0517 00:37:03.323980 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.325248 kubelet[2785]: E0517 00:37:03.324209 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.325504 kubelet[2785]: W0517 00:37:03.325359 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.325504 kubelet[2785]: E0517 00:37:03.325387 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.325760 kubelet[2785]: E0517 00:37:03.325744 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.325879 kubelet[2785]: W0517 00:37:03.325760 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.325879 kubelet[2785]: E0517 00:37:03.325776 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.329217 kubelet[2785]: E0517 00:37:03.326642 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.329217 kubelet[2785]: W0517 00:37:03.326657 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.329217 kubelet[2785]: E0517 00:37:03.326673 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.329217 kubelet[2785]: E0517 00:37:03.327406 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.329217 kubelet[2785]: W0517 00:37:03.327422 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.329217 kubelet[2785]: E0517 00:37:03.327437 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.329217 kubelet[2785]: E0517 00:37:03.327660 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.329217 kubelet[2785]: W0517 00:37:03.327671 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.329217 kubelet[2785]: E0517 00:37:03.327683 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.329217 kubelet[2785]: E0517 00:37:03.327910 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.329729 kubelet[2785]: W0517 00:37:03.327920 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.329729 kubelet[2785]: E0517 00:37:03.327935 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.329729 kubelet[2785]: E0517 00:37:03.328226 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.329729 kubelet[2785]: W0517 00:37:03.328237 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.329729 kubelet[2785]: E0517 00:37:03.328249 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.329729 kubelet[2785]: E0517 00:37:03.328454 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.329729 kubelet[2785]: W0517 00:37:03.328463 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.329729 kubelet[2785]: E0517 00:37:03.328474 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.329729 kubelet[2785]: E0517 00:37:03.328712 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.329729 kubelet[2785]: W0517 00:37:03.328721 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.330180 kubelet[2785]: E0517 00:37:03.328733 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.330180 kubelet[2785]: E0517 00:37:03.329038 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.330180 kubelet[2785]: W0517 00:37:03.329049 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.330180 kubelet[2785]: E0517 00:37:03.329062 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.333089 kubelet[2785]: E0517 00:37:03.331106 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.333089 kubelet[2785]: W0517 00:37:03.331124 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.333089 kubelet[2785]: E0517 00:37:03.331149 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.333089 kubelet[2785]: E0517 00:37:03.331515 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.333089 kubelet[2785]: W0517 00:37:03.331527 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.333089 kubelet[2785]: E0517 00:37:03.331546 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.333089 kubelet[2785]: E0517 00:37:03.331804 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.333089 kubelet[2785]: W0517 00:37:03.331815 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.333089 kubelet[2785]: E0517 00:37:03.331898 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.333089 kubelet[2785]: E0517 00:37:03.332127 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.333615 kubelet[2785]: W0517 00:37:03.332138 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.333615 kubelet[2785]: E0517 00:37:03.332226 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.333615 kubelet[2785]: E0517 00:37:03.332395 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.333615 kubelet[2785]: W0517 00:37:03.332405 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.333615 kubelet[2785]: E0517 00:37:03.332505 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.333615 kubelet[2785]: E0517 00:37:03.332650 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.333615 kubelet[2785]: W0517 00:37:03.332659 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.333615 kubelet[2785]: E0517 00:37:03.332748 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.333615 kubelet[2785]: E0517 00:37:03.332915 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.333615 kubelet[2785]: W0517 00:37:03.332939 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.334049 kubelet[2785]: E0517 00:37:03.332963 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.334638 kubelet[2785]: E0517 00:37:03.334136 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.334638 kubelet[2785]: W0517 00:37:03.334147 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.334638 kubelet[2785]: E0517 00:37:03.334238 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.334638 kubelet[2785]: E0517 00:37:03.334517 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.334638 kubelet[2785]: W0517 00:37:03.334524 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.334638 kubelet[2785]: E0517 00:37:03.334608 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.336181 kubelet[2785]: E0517 00:37:03.335725 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.336181 kubelet[2785]: W0517 00:37:03.335748 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.336181 kubelet[2785]: E0517 00:37:03.335922 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.336181 kubelet[2785]: E0517 00:37:03.336089 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.336181 kubelet[2785]: W0517 00:37:03.336099 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.336597 kubelet[2785]: E0517 00:37:03.336455 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.336745 kubelet[2785]: E0517 00:37:03.336735 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.336880 kubelet[2785]: W0517 00:37:03.336865 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.337044 kubelet[2785]: E0517 00:37:03.337031 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.337273 kubelet[2785]: E0517 00:37:03.337261 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.337369 kubelet[2785]: W0517 00:37:03.337356 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.337460 kubelet[2785]: E0517 00:37:03.337448 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.337782 kubelet[2785]: E0517 00:37:03.337769 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.337907 kubelet[2785]: W0517 00:37:03.337893 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.337996 kubelet[2785]: E0517 00:37:03.337984 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.354364 kubelet[2785]: E0517 00:37:03.352114 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.354364 kubelet[2785]: W0517 00:37:03.352148 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.354364 kubelet[2785]: E0517 00:37:03.352214 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:37:03.354364 kubelet[2785]: E0517 00:37:03.353373 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.354364 kubelet[2785]: W0517 00:37:03.353391 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.354364 kubelet[2785]: E0517 00:37:03.353439 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.374134 kubelet[2785]: E0517 00:37:03.371170 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:37:03.374134 kubelet[2785]: W0517 00:37:03.371246 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:37:03.374134 kubelet[2785]: E0517 00:37:03.371282 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:37:03.374389 env[1842]: time="2025-05-17T00:37:03.372154138Z" level=info msg="CreateContainer within sandbox \"5b0ee588d1361ad16ae5f87ff2e52158f4ac290949ee718631321212a2e4cf1f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c539a8c236e697380904b162ee1f95fa2855e15a98a0ef66e113eed5fbeed7d0\"" May 17 00:37:03.375188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3261821249.mount: Deactivated successfully. 
May 17 00:37:03.376772 env[1842]: time="2025-05-17T00:37:03.375355173Z" level=info msg="StartContainer for \"c539a8c236e697380904b162ee1f95fa2855e15a98a0ef66e113eed5fbeed7d0\"" May 17 00:37:03.528298 env[1842]: time="2025-05-17T00:37:03.528228366Z" level=info msg="StartContainer for \"c539a8c236e697380904b162ee1f95fa2855e15a98a0ef66e113eed5fbeed7d0\" returns successfully" May 17 00:37:03.597167 env[1842]: time="2025-05-17T00:37:03.597004310Z" level=info msg="shim disconnected" id=c539a8c236e697380904b162ee1f95fa2855e15a98a0ef66e113eed5fbeed7d0 May 17 00:37:03.597167 env[1842]: time="2025-05-17T00:37:03.597072026Z" level=warning msg="cleaning up after shim disconnected" id=c539a8c236e697380904b162ee1f95fa2855e15a98a0ef66e113eed5fbeed7d0 namespace=k8s.io May 17 00:37:03.597167 env[1842]: time="2025-05-17T00:37:03.597087702Z" level=info msg="cleaning up dead shim" May 17 00:37:03.614400 env[1842]: time="2025-05-17T00:37:03.614329236Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:37:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3710 runtime=io.containerd.runc.v2\n" May 17 00:37:04.071493 kubelet[2785]: E0517 00:37:04.071435 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z72kr" podUID="7d455070-45c6-475a-be66-925b4a2071bc" May 17 00:37:04.255826 env[1842]: time="2025-05-17T00:37:04.255766325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:37:04.360145 systemd[1]: run-containerd-runc-k8s.io-c539a8c236e697380904b162ee1f95fa2855e15a98a0ef66e113eed5fbeed7d0-runc.swaipv.mount: Deactivated successfully. May 17 00:37:04.364331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c539a8c236e697380904b162ee1f95fa2855e15a98a0ef66e113eed5fbeed7d0-rootfs.mount: Deactivated successfully. 
May 17 00:37:06.072465 kubelet[2785]: E0517 00:37:06.072430 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z72kr" podUID="7d455070-45c6-475a-be66-925b4a2071bc" May 17 00:37:07.445308 env[1842]: time="2025-05-17T00:37:07.445245039Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:07.447863 env[1842]: time="2025-05-17T00:37:07.447790735Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:07.450094 env[1842]: time="2025-05-17T00:37:07.450036128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:07.451615 env[1842]: time="2025-05-17T00:37:07.451577463Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:07.452319 env[1842]: time="2025-05-17T00:37:07.452281641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:37:07.461322 env[1842]: time="2025-05-17T00:37:07.461277121Z" level=info msg="CreateContainer within sandbox \"5b0ee588d1361ad16ae5f87ff2e52158f4ac290949ee718631321212a2e4cf1f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:37:07.478185 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2280632810.mount: Deactivated successfully. May 17 00:37:07.487407 env[1842]: time="2025-05-17T00:37:07.487343099Z" level=info msg="CreateContainer within sandbox \"5b0ee588d1361ad16ae5f87ff2e52158f4ac290949ee718631321212a2e4cf1f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d1ec38e03d0b29dd4f7cddd3c3c73abfc3e4f6ce0785bb475c869dc8d39ebdfe\"" May 17 00:37:07.488591 env[1842]: time="2025-05-17T00:37:07.488435317Z" level=info msg="StartContainer for \"d1ec38e03d0b29dd4f7cddd3c3c73abfc3e4f6ce0785bb475c869dc8d39ebdfe\"" May 17 00:37:07.631372 env[1842]: time="2025-05-17T00:37:07.631318711Z" level=info msg="StartContainer for \"d1ec38e03d0b29dd4f7cddd3c3c73abfc3e4f6ce0785bb475c869dc8d39ebdfe\" returns successfully" May 17 00:37:08.072792 kubelet[2785]: E0517 00:37:08.072201 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z72kr" podUID="7d455070-45c6-475a-be66-925b4a2071bc" May 17 00:37:10.072178 kubelet[2785]: E0517 00:37:10.072110 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z72kr" podUID="7d455070-45c6-475a-be66-925b4a2071bc" May 17 00:37:11.090191 env[1842]: time="2025-05-17T00:37:11.090115660Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:37:11.119290 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-d1ec38e03d0b29dd4f7cddd3c3c73abfc3e4f6ce0785bb475c869dc8d39ebdfe-rootfs.mount: Deactivated successfully. May 17 00:37:11.128935 env[1842]: time="2025-05-17T00:37:11.128887866Z" level=info msg="shim disconnected" id=d1ec38e03d0b29dd4f7cddd3c3c73abfc3e4f6ce0785bb475c869dc8d39ebdfe May 17 00:37:11.128935 env[1842]: time="2025-05-17T00:37:11.128936567Z" level=warning msg="cleaning up after shim disconnected" id=d1ec38e03d0b29dd4f7cddd3c3c73abfc3e4f6ce0785bb475c869dc8d39ebdfe namespace=k8s.io May 17 00:37:11.129181 env[1842]: time="2025-05-17T00:37:11.128950742Z" level=info msg="cleaning up dead shim" May 17 00:37:11.137698 kubelet[2785]: I0517 00:37:11.137643 2785 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:37:11.144371 env[1842]: time="2025-05-17T00:37:11.144303671Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:37:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3782 runtime=io.containerd.runc.v2\n" May 17 00:37:11.266405 env[1842]: time="2025-05-17T00:37:11.266163317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:37:11.307281 kubelet[2785]: I0517 00:37:11.307229 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjggb\" (UniqueName: \"kubernetes.io/projected/0739d8f4-ba25-4a9e-a618-fc129948c71f-kube-api-access-kjggb\") pod \"whisker-5669ccd8b7-dh98d\" (UID: \"0739d8f4-ba25-4a9e-a618-fc129948c71f\") " pod="calico-system/whisker-5669ccd8b7-dh98d" May 17 00:37:11.307509 kubelet[2785]: I0517 00:37:11.307495 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r697m\" (UniqueName: \"kubernetes.io/projected/e86c1943-5287-43a1-8c5c-dfb67368014d-kube-api-access-r697m\") pod \"calico-apiserver-74446499d9-stgnw\" (UID: \"e86c1943-5287-43a1-8c5c-dfb67368014d\") " 
pod="calico-apiserver/calico-apiserver-74446499d9-stgnw" May 17 00:37:11.307631 kubelet[2785]: I0517 00:37:11.307620 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwwxt\" (UniqueName: \"kubernetes.io/projected/d4a0e73a-3af9-4fe3-8741-031055e915ab-kube-api-access-xwwxt\") pod \"goldmane-8f77d7b6c-m25cz\" (UID: \"d4a0e73a-3af9-4fe3-8741-031055e915ab\") " pod="calico-system/goldmane-8f77d7b6c-m25cz" May 17 00:37:11.307742 kubelet[2785]: I0517 00:37:11.307729 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10bbdf63-9286-463c-aba2-5278ed8400da-tigera-ca-bundle\") pod \"calico-kube-controllers-7487d56f97-mnmbq\" (UID: \"10bbdf63-9286-463c-aba2-5278ed8400da\") " pod="calico-system/calico-kube-controllers-7487d56f97-mnmbq" May 17 00:37:11.307874 kubelet[2785]: I0517 00:37:11.307863 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4a0e73a-3af9-4fe3-8741-031055e915ab-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-m25cz\" (UID: \"d4a0e73a-3af9-4fe3-8741-031055e915ab\") " pod="calico-system/goldmane-8f77d7b6c-m25cz" May 17 00:37:11.307997 kubelet[2785]: I0517 00:37:11.307983 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4a0e73a-3af9-4fe3-8741-031055e915ab-config\") pod \"goldmane-8f77d7b6c-m25cz\" (UID: \"d4a0e73a-3af9-4fe3-8741-031055e915ab\") " pod="calico-system/goldmane-8f77d7b6c-m25cz" May 17 00:37:11.308112 kubelet[2785]: I0517 00:37:11.308098 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ad07685-9392-4209-bbe3-44ad549c6102-config-volume\") pod 
\"coredns-7c65d6cfc9-q95gq\" (UID: \"8ad07685-9392-4209-bbe3-44ad549c6102\") " pod="kube-system/coredns-7c65d6cfc9-q95gq" May 17 00:37:11.308225 kubelet[2785]: I0517 00:37:11.308214 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce-config-volume\") pod \"coredns-7c65d6cfc9-pvwnn\" (UID: \"cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce\") " pod="kube-system/coredns-7c65d6cfc9-pvwnn" May 17 00:37:11.308332 kubelet[2785]: I0517 00:37:11.308322 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8lx\" (UniqueName: \"kubernetes.io/projected/cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce-kube-api-access-hn8lx\") pod \"coredns-7c65d6cfc9-pvwnn\" (UID: \"cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce\") " pod="kube-system/coredns-7c65d6cfc9-pvwnn" May 17 00:37:11.308433 kubelet[2785]: I0517 00:37:11.308423 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0739d8f4-ba25-4a9e-a618-fc129948c71f-whisker-ca-bundle\") pod \"whisker-5669ccd8b7-dh98d\" (UID: \"0739d8f4-ba25-4a9e-a618-fc129948c71f\") " pod="calico-system/whisker-5669ccd8b7-dh98d" May 17 00:37:11.308549 kubelet[2785]: I0517 00:37:11.308537 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58nc4\" (UniqueName: \"kubernetes.io/projected/10bbdf63-9286-463c-aba2-5278ed8400da-kube-api-access-58nc4\") pod \"calico-kube-controllers-7487d56f97-mnmbq\" (UID: \"10bbdf63-9286-463c-aba2-5278ed8400da\") " pod="calico-system/calico-kube-controllers-7487d56f97-mnmbq" May 17 00:37:11.308646 kubelet[2785]: I0517 00:37:11.308637 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/d4a0e73a-3af9-4fe3-8741-031055e915ab-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-m25cz\" (UID: \"d4a0e73a-3af9-4fe3-8741-031055e915ab\") " pod="calico-system/goldmane-8f77d7b6c-m25cz" May 17 00:37:11.308746 kubelet[2785]: I0517 00:37:11.308737 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e32dedf8-419f-45be-b1fc-aa56e4bf91b3-calico-apiserver-certs\") pod \"calico-apiserver-74446499d9-7jqmj\" (UID: \"e32dedf8-419f-45be-b1fc-aa56e4bf91b3\") " pod="calico-apiserver/calico-apiserver-74446499d9-7jqmj" May 17 00:37:11.308893 kubelet[2785]: I0517 00:37:11.308881 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9chh\" (UniqueName: \"kubernetes.io/projected/e32dedf8-419f-45be-b1fc-aa56e4bf91b3-kube-api-access-b9chh\") pod \"calico-apiserver-74446499d9-7jqmj\" (UID: \"e32dedf8-419f-45be-b1fc-aa56e4bf91b3\") " pod="calico-apiserver/calico-apiserver-74446499d9-7jqmj" May 17 00:37:11.308988 kubelet[2785]: I0517 00:37:11.308978 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0739d8f4-ba25-4a9e-a618-fc129948c71f-whisker-backend-key-pair\") pod \"whisker-5669ccd8b7-dh98d\" (UID: \"0739d8f4-ba25-4a9e-a618-fc129948c71f\") " pod="calico-system/whisker-5669ccd8b7-dh98d" May 17 00:37:11.309063 kubelet[2785]: I0517 00:37:11.309053 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e86c1943-5287-43a1-8c5c-dfb67368014d-calico-apiserver-certs\") pod \"calico-apiserver-74446499d9-stgnw\" (UID: \"e86c1943-5287-43a1-8c5c-dfb67368014d\") " pod="calico-apiserver/calico-apiserver-74446499d9-stgnw" May 17 00:37:11.309130 kubelet[2785]: I0517 00:37:11.309121 2785 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qfp6\" (UniqueName: \"kubernetes.io/projected/8ad07685-9392-4209-bbe3-44ad549c6102-kube-api-access-7qfp6\") pod \"coredns-7c65d6cfc9-q95gq\" (UID: \"8ad07685-9392-4209-bbe3-44ad549c6102\") " pod="kube-system/coredns-7c65d6cfc9-q95gq" May 17 00:37:11.483377 env[1842]: time="2025-05-17T00:37:11.482962380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q95gq,Uid:8ad07685-9392-4209-bbe3-44ad549c6102,Namespace:kube-system,Attempt:0,}" May 17 00:37:11.496730 env[1842]: time="2025-05-17T00:37:11.496681599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7487d56f97-mnmbq,Uid:10bbdf63-9286-463c-aba2-5278ed8400da,Namespace:calico-system,Attempt:0,}" May 17 00:37:11.497526 env[1842]: time="2025-05-17T00:37:11.497489924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5669ccd8b7-dh98d,Uid:0739d8f4-ba25-4a9e-a618-fc129948c71f,Namespace:calico-system,Attempt:0,}" May 17 00:37:11.504540 env[1842]: time="2025-05-17T00:37:11.504490731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74446499d9-stgnw,Uid:e86c1943-5287-43a1-8c5c-dfb67368014d,Namespace:calico-apiserver,Attempt:0,}" May 17 00:37:11.520764 env[1842]: time="2025-05-17T00:37:11.520703444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-m25cz,Uid:d4a0e73a-3af9-4fe3-8741-031055e915ab,Namespace:calico-system,Attempt:0,}" May 17 00:37:11.528630 env[1842]: time="2025-05-17T00:37:11.528580400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74446499d9-7jqmj,Uid:e32dedf8-419f-45be-b1fc-aa56e4bf91b3,Namespace:calico-apiserver,Attempt:0,}" May 17 00:37:11.529498 env[1842]: time="2025-05-17T00:37:11.529444237Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-pvwnn,Uid:cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce,Namespace:kube-system,Attempt:0,}" May 17 00:37:11.903072 env[1842]: time="2025-05-17T00:37:11.902987787Z" level=error msg="Failed to destroy network for sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.903861 env[1842]: time="2025-05-17T00:37:11.903797017Z" level=error msg="encountered an error cleaning up failed sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.904104 env[1842]: time="2025-05-17T00:37:11.904056877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7487d56f97-mnmbq,Uid:10bbdf63-9286-463c-aba2-5278ed8400da,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.911565 kubelet[2785]: E0517 00:37:11.904528 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.911565 kubelet[2785]: E0517 
00:37:11.911260 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7487d56f97-mnmbq" May 17 00:37:11.911565 kubelet[2785]: E0517 00:37:11.911299 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7487d56f97-mnmbq" May 17 00:37:11.911932 kubelet[2785]: E0517 00:37:11.911399 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7487d56f97-mnmbq_calico-system(10bbdf63-9286-463c-aba2-5278ed8400da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7487d56f97-mnmbq_calico-system(10bbdf63-9286-463c-aba2-5278ed8400da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7487d56f97-mnmbq" podUID="10bbdf63-9286-463c-aba2-5278ed8400da" May 17 00:37:11.939231 env[1842]: time="2025-05-17T00:37:11.939160380Z" level=error msg="Failed to destroy network for sandbox 
\"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.939599 env[1842]: time="2025-05-17T00:37:11.939556628Z" level=error msg="encountered an error cleaning up failed sandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.939703 env[1842]: time="2025-05-17T00:37:11.939629667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pvwnn,Uid:cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.939959 kubelet[2785]: E0517 00:37:11.939908 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.940073 kubelet[2785]: E0517 00:37:11.939984 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pvwnn" May 17 00:37:11.940073 kubelet[2785]: E0517 00:37:11.940014 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pvwnn" May 17 00:37:11.940174 kubelet[2785]: E0517 00:37:11.940073 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-pvwnn_kube-system(cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-pvwnn_kube-system(cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pvwnn" podUID="cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce" May 17 00:37:11.976260 env[1842]: time="2025-05-17T00:37:11.976200054Z" level=error msg="Failed to destroy network for sandbox \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.976643 env[1842]: time="2025-05-17T00:37:11.976603322Z" level=error msg="encountered an error cleaning up failed sandbox 
\"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.976821 env[1842]: time="2025-05-17T00:37:11.976666758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5669ccd8b7-dh98d,Uid:0739d8f4-ba25-4a9e-a618-fc129948c71f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.976988 kubelet[2785]: E0517 00:37:11.976947 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:11.977075 kubelet[2785]: E0517 00:37:11.977024 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5669ccd8b7-dh98d" May 17 00:37:11.977075 kubelet[2785]: E0517 00:37:11.977053 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5669ccd8b7-dh98d" May 17 00:37:11.977168 kubelet[2785]: E0517 00:37:11.977114 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5669ccd8b7-dh98d_calico-system(0739d8f4-ba25-4a9e-a618-fc129948c71f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5669ccd8b7-dh98d_calico-system(0739d8f4-ba25-4a9e-a618-fc129948c71f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5669ccd8b7-dh98d" podUID="0739d8f4-ba25-4a9e-a618-fc129948c71f" May 17 00:37:12.001048 env[1842]: time="2025-05-17T00:37:12.000980617Z" level=error msg="Failed to destroy network for sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.001649 env[1842]: time="2025-05-17T00:37:12.001600838Z" level=error msg="encountered an error cleaning up failed sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.001898 env[1842]: time="2025-05-17T00:37:12.001861490Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74446499d9-stgnw,Uid:e86c1943-5287-43a1-8c5c-dfb67368014d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.003480 kubelet[2785]: E0517 00:37:12.002294 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.003480 kubelet[2785]: E0517 00:37:12.002367 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74446499d9-stgnw" May 17 00:37:12.003480 kubelet[2785]: E0517 00:37:12.002395 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74446499d9-stgnw" May 17 00:37:12.003722 kubelet[2785]: E0517 00:37:12.002458 2785 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74446499d9-stgnw_calico-apiserver(e86c1943-5287-43a1-8c5c-dfb67368014d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74446499d9-stgnw_calico-apiserver(e86c1943-5287-43a1-8c5c-dfb67368014d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74446499d9-stgnw" podUID="e86c1943-5287-43a1-8c5c-dfb67368014d" May 17 00:37:12.017559 env[1842]: time="2025-05-17T00:37:12.017491364Z" level=error msg="Failed to destroy network for sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.017963 env[1842]: time="2025-05-17T00:37:12.017909920Z" level=error msg="encountered an error cleaning up failed sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.018076 env[1842]: time="2025-05-17T00:37:12.017973085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q95gq,Uid:8ad07685-9392-4209-bbe3-44ad549c6102,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.018773 kubelet[2785]: E0517 00:37:12.018289 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.018773 kubelet[2785]: E0517 00:37:12.018367 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q95gq" May 17 00:37:12.018773 kubelet[2785]: E0517 00:37:12.018393 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q95gq" May 17 00:37:12.018988 kubelet[2785]: E0517 00:37:12.018457 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q95gq_kube-system(8ad07685-9392-4209-bbe3-44ad549c6102)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q95gq_kube-system(8ad07685-9392-4209-bbe3-44ad549c6102)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q95gq" podUID="8ad07685-9392-4209-bbe3-44ad549c6102" May 17 00:37:12.019217 env[1842]: time="2025-05-17T00:37:12.019174776Z" level=error msg="Failed to destroy network for sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.019891 env[1842]: time="2025-05-17T00:37:12.019817508Z" level=error msg="encountered an error cleaning up failed sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.020084 env[1842]: time="2025-05-17T00:37:12.020040602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74446499d9-7jqmj,Uid:e32dedf8-419f-45be-b1fc-aa56e4bf91b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.020777 kubelet[2785]: E0517 00:37:12.020400 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.020777 kubelet[2785]: E0517 00:37:12.020612 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74446499d9-7jqmj" May 17 00:37:12.020777 kubelet[2785]: E0517 00:37:12.020649 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74446499d9-7jqmj" May 17 00:37:12.021040 kubelet[2785]: E0517 00:37:12.020699 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74446499d9-7jqmj_calico-apiserver(e32dedf8-419f-45be-b1fc-aa56e4bf91b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74446499d9-7jqmj_calico-apiserver(e32dedf8-419f-45be-b1fc-aa56e4bf91b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74446499d9-7jqmj" 
podUID="e32dedf8-419f-45be-b1fc-aa56e4bf91b3" May 17 00:37:12.025183 env[1842]: time="2025-05-17T00:37:12.025111789Z" level=error msg="Failed to destroy network for sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.025642 env[1842]: time="2025-05-17T00:37:12.025592891Z" level=error msg="encountered an error cleaning up failed sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.025758 env[1842]: time="2025-05-17T00:37:12.025710277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-m25cz,Uid:d4a0e73a-3af9-4fe3-8741-031055e915ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.026009 kubelet[2785]: E0517 00:37:12.025968 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.026114 kubelet[2785]: E0517 00:37:12.026050 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-m25cz" May 17 00:37:12.026114 kubelet[2785]: E0517 00:37:12.026078 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-m25cz" May 17 00:37:12.026217 kubelet[2785]: E0517 00:37:12.026156 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-m25cz_calico-system(d4a0e73a-3af9-4fe3-8741-031055e915ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-m25cz_calico-system(d4a0e73a-3af9-4fe3-8741-031055e915ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:37:12.074501 env[1842]: time="2025-05-17T00:37:12.074453719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z72kr,Uid:7d455070-45c6-475a-be66-925b4a2071bc,Namespace:calico-system,Attempt:0,}" May 17 00:37:12.165592 env[1842]: time="2025-05-17T00:37:12.165466151Z" level=error msg="Failed to destroy network for sandbox 
\"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.166937 env[1842]: time="2025-05-17T00:37:12.166883146Z" level=error msg="encountered an error cleaning up failed sandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.167126 env[1842]: time="2025-05-17T00:37:12.167090126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z72kr,Uid:7d455070-45c6-475a-be66-925b4a2071bc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.167602 kubelet[2785]: E0517 00:37:12.167454 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.168535 kubelet[2785]: E0517 00:37:12.168101 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z72kr" May 17 00:37:12.168535 kubelet[2785]: E0517 00:37:12.168153 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z72kr" May 17 00:37:12.168535 kubelet[2785]: E0517 00:37:12.168206 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z72kr_calico-system(7d455070-45c6-475a-be66-925b4a2071bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z72kr_calico-system(7d455070-45c6-475a-be66-925b4a2071bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z72kr" podUID="7d455070-45c6-475a-be66-925b4a2071bc" May 17 00:37:12.174552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0-shm.mount: Deactivated successfully. 
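Every failure above reduces to the same root cause: the Calico CNI plugin `stat()`s `/var/lib/calico/nodename` during sandbox add/delete and aborts when the file is absent, which happens until the calico/node container starts and mounts `/var/lib/calico/`. A minimal diagnostic sketch of that check follows; this is an editor's illustration, not part of the log, and the helper name `check_calico_nodename` is hypothetical:

```shell
#!/bin/sh
# Mirrors the condition the CNI plugin tests: the nodename file must exist
# before any "add" or "delete" network operation can succeed.
check_calico_nodename() {
  # Path defaults to the one named in the log; overridable for testing.
  f="${1:-/var/lib/calico/nodename}"
  if [ -f "$f" ]; then
    echo "nodename present: $(cat "$f")"
  else
    # Same remediation hint the plugin emits in the log above.
    echo "nodename missing: check that calico/node is running and has mounted /var/lib/calico/" >&2
    return 1
  fi
}
```

In practice one would confirm the calico-node pod is up (e.g. `kubectl -n calico-system get pods -l k8s-app=calico-node`); once it writes the nodename file, the kubelet's sandbox retries begin to succeed without intervention.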
May 17 00:37:12.268954 kubelet[2785]: I0517 00:37:12.268917 2785 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:12.270822 kubelet[2785]: I0517 00:37:12.270737 2785 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:12.280588 kubelet[2785]: I0517 00:37:12.280168 2785 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:12.282070 kubelet[2785]: I0517 00:37:12.282045 2785 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:12.284876 kubelet[2785]: I0517 00:37:12.284853 2785 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:12.286396 kubelet[2785]: I0517 00:37:12.286328 2785 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:12.287869 kubelet[2785]: I0517 00:37:12.287649 2785 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:12.293463 kubelet[2785]: I0517 00:37:12.293332 2785 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:12.295214 env[1842]: time="2025-05-17T00:37:12.295171289Z" level=info msg="StopPodSandbox for \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\"" May 17 00:37:12.297409 env[1842]: time="2025-05-17T00:37:12.295209060Z" level=info msg="StopPodSandbox 
for \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\"" May 17 00:37:12.298350 env[1842]: time="2025-05-17T00:37:12.295249202Z" level=info msg="StopPodSandbox for \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\"" May 17 00:37:12.298971 env[1842]: time="2025-05-17T00:37:12.295281952Z" level=info msg="StopPodSandbox for \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\"" May 17 00:37:12.299949 env[1842]: time="2025-05-17T00:37:12.295341929Z" level=info msg="StopPodSandbox for \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\"" May 17 00:37:12.300748 env[1842]: time="2025-05-17T00:37:12.295372746Z" level=info msg="StopPodSandbox for \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\"" May 17 00:37:12.301336 env[1842]: time="2025-05-17T00:37:12.295428664Z" level=info msg="StopPodSandbox for \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\"" May 17 00:37:12.301768 env[1842]: time="2025-05-17T00:37:12.295479159Z" level=info msg="StopPodSandbox for \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\"" May 17 00:37:12.518410 env[1842]: time="2025-05-17T00:37:12.505920593Z" level=error msg="StopPodSandbox for \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\" failed" error="failed to destroy network for sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.525100 env[1842]: time="2025-05-17T00:37:12.525027009Z" level=error msg="StopPodSandbox for \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\" failed" error="failed to destroy network for sandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.525288 kubelet[2785]: E0517 00:37:12.525050 2785 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:12.526120 kubelet[2785]: E0517 00:37:12.526070 2785 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:12.528691 kubelet[2785]: E0517 00:37:12.528616 2785 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927"} May 17 00:37:12.528813 kubelet[2785]: E0517 00:37:12.528720 2785 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ad07685-9392-4209-bbe3-44ad549c6102\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:37:12.528813 kubelet[2785]: E0517 00:37:12.528768 
2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ad07685-9392-4209-bbe3-44ad549c6102\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q95gq" podUID="8ad07685-9392-4209-bbe3-44ad549c6102" May 17 00:37:12.528813 kubelet[2785]: E0517 00:37:12.528673 2785 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef"} May 17 00:37:12.529202 kubelet[2785]: E0517 00:37:12.528855 2785 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:37:12.529202 kubelet[2785]: E0517 00:37:12.528880 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pvwnn" 
podUID="cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce" May 17 00:37:12.591482 env[1842]: time="2025-05-17T00:37:12.591406549Z" level=error msg="StopPodSandbox for \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\" failed" error="failed to destroy network for sandbox \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.592257 kubelet[2785]: E0517 00:37:12.591981 2785 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:12.592257 kubelet[2785]: E0517 00:37:12.592064 2785 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430"} May 17 00:37:12.592257 kubelet[2785]: E0517 00:37:12.592116 2785 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0739d8f4-ba25-4a9e-a618-fc129948c71f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:37:12.596098 kubelet[2785]: E0517 00:37:12.592149 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"0739d8f4-ba25-4a9e-a618-fc129948c71f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5669ccd8b7-dh98d" podUID="0739d8f4-ba25-4a9e-a618-fc129948c71f" May 17 00:37:12.612584 env[1842]: time="2025-05-17T00:37:12.612514456Z" level=error msg="StopPodSandbox for \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\" failed" error="failed to destroy network for sandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.613392 kubelet[2785]: E0517 00:37:12.613025 2785 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:12.613392 kubelet[2785]: E0517 00:37:12.613095 2785 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0"} May 17 00:37:12.613392 kubelet[2785]: E0517 00:37:12.613143 2785 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d455070-45c6-475a-be66-925b4a2071bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:37:12.613392 kubelet[2785]: E0517 00:37:12.613248 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d455070-45c6-475a-be66-925b4a2071bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z72kr" podUID="7d455070-45c6-475a-be66-925b4a2071bc" May 17 00:37:12.614575 env[1842]: time="2025-05-17T00:37:12.614517545Z" level=error msg="StopPodSandbox for \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\" failed" error="failed to destroy network for sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.615126 kubelet[2785]: E0517 00:37:12.614943 2785 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:12.615126 kubelet[2785]: E0517 00:37:12.615006 
2785 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2"} May 17 00:37:12.615126 kubelet[2785]: E0517 00:37:12.615050 2785 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4a0e73a-3af9-4fe3-8741-031055e915ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:37:12.615126 kubelet[2785]: E0517 00:37:12.615082 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4a0e73a-3af9-4fe3-8741-031055e915ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:37:12.628060 env[1842]: time="2025-05-17T00:37:12.627992245Z" level=error msg="StopPodSandbox for \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\" failed" error="failed to destroy network for sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.628903 kubelet[2785]: E0517 00:37:12.628693 2785 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:12.628903 kubelet[2785]: E0517 00:37:12.628756 2785 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24"} May 17 00:37:12.628903 kubelet[2785]: E0517 00:37:12.628802 2785 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10bbdf63-9286-463c-aba2-5278ed8400da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:37:12.628903 kubelet[2785]: E0517 00:37:12.628856 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10bbdf63-9286-463c-aba2-5278ed8400da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7487d56f97-mnmbq" podUID="10bbdf63-9286-463c-aba2-5278ed8400da" May 17 00:37:12.635162 env[1842]: time="2025-05-17T00:37:12.635102128Z" level=error msg="StopPodSandbox for 
\"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\" failed" error="failed to destroy network for sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.635742 kubelet[2785]: E0517 00:37:12.635549 2785 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:12.635742 kubelet[2785]: E0517 00:37:12.635611 2785 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3"} May 17 00:37:12.635742 kubelet[2785]: E0517 00:37:12.635663 2785 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e32dedf8-419f-45be-b1fc-aa56e4bf91b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:37:12.635742 kubelet[2785]: E0517 00:37:12.635695 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e32dedf8-419f-45be-b1fc-aa56e4bf91b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74446499d9-7jqmj" podUID="e32dedf8-419f-45be-b1fc-aa56e4bf91b3" May 17 00:37:12.650057 env[1842]: time="2025-05-17T00:37:12.649986849Z" level=error msg="StopPodSandbox for \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\" failed" error="failed to destroy network for sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:37:12.650328 kubelet[2785]: E0517 00:37:12.650255 2785 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:12.650431 kubelet[2785]: E0517 00:37:12.650348 2785 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3"} May 17 00:37:12.650431 kubelet[2785]: E0517 00:37:12.650390 2785 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e86c1943-5287-43a1-8c5c-dfb67368014d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:37:12.650576 kubelet[2785]: E0517 00:37:12.650422 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e86c1943-5287-43a1-8c5c-dfb67368014d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74446499d9-stgnw" podUID="e86c1943-5287-43a1-8c5c-dfb67368014d" May 17 00:37:17.757238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257417587.mount: Deactivated successfully. May 17 00:37:17.803310 env[1842]: time="2025-05-17T00:37:17.803195685Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:17.806060 env[1842]: time="2025-05-17T00:37:17.806023242Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:17.807713 env[1842]: time="2025-05-17T00:37:17.807677606Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:17.809184 env[1842]: time="2025-05-17T00:37:17.809141341Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:37:17.850115 env[1842]: time="2025-05-17T00:37:17.813275282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:37:17.905337 env[1842]: time="2025-05-17T00:37:17.905251275Z" level=info msg="CreateContainer within sandbox \"5b0ee588d1361ad16ae5f87ff2e52158f4ac290949ee718631321212a2e4cf1f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:37:17.978540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3597228837.mount: Deactivated successfully. May 17 00:37:17.988789 env[1842]: time="2025-05-17T00:37:17.988697761Z" level=info msg="CreateContainer within sandbox \"5b0ee588d1361ad16ae5f87ff2e52158f4ac290949ee718631321212a2e4cf1f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6bf024fc83b024802c8b9fc4aa1a3b3e3ef5c240ac73965e8f66c07b2286fd0c\"" May 17 00:37:17.989280 env[1842]: time="2025-05-17T00:37:17.989240762Z" level=info msg="StartContainer for \"6bf024fc83b024802c8b9fc4aa1a3b3e3ef5c240ac73965e8f66c07b2286fd0c\"" May 17 00:37:18.068256 env[1842]: time="2025-05-17T00:37:18.068200282Z" level=info msg="StartContainer for \"6bf024fc83b024802c8b9fc4aa1a3b3e3ef5c240ac73965e8f66c07b2286fd0c\" returns successfully" May 17 00:37:18.396020 kubelet[2785]: I0517 00:37:18.391044 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9l9qz" podStartSLOduration=1.5611120760000001 podStartE2EDuration="20.382220971s" podCreationTimestamp="2025-05-17 00:36:58 +0000 UTC" firstStartedPulling="2025-05-17 00:36:58.99596063 +0000 UTC m=+20.219349356" lastFinishedPulling="2025-05-17 00:37:17.817069524 +0000 UTC m=+39.040458251" observedRunningTime="2025-05-17 00:37:18.381436636 +0000 UTC m=+39.604825382" watchObservedRunningTime="2025-05-17 00:37:18.382220971 +0000 UTC m=+39.605609719" May 17 00:37:19.438945 
kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:37:19.440274 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 17 00:37:19.480735 systemd[1]: run-containerd-runc-k8s.io-6bf024fc83b024802c8b9fc4aa1a3b3e3ef5c240ac73965e8f66c07b2286fd0c-runc.UTBVxB.mount: Deactivated successfully. May 17 00:37:20.990319 kernel: kauditd_printk_skb: 2 callbacks suppressed May 17 00:37:20.990496 kernel: audit: type=1400 audit(1747442240.980:300): avc: denied { write } for pid=4308 comm="tee" name="fd" dev="proc" ino=25721 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:37:20.980000 audit[4308]: AVC avc: denied { write } for pid=4308 comm="tee" name="fd" dev="proc" ino=25721 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:37:21.029514 kernel: audit: type=1400 audit(1747442241.005:301): avc: denied { write } for pid=4316 comm="tee" name="fd" dev="proc" ino=25732 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:37:21.029632 kernel: audit: type=1300 audit(1747442241.005:301): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7d71a7df a2=241 a3=1b6 items=1 ppid=4273 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.005000 audit[4316]: AVC avc: denied { write } for pid=4316 comm="tee" name="fd" dev="proc" ino=25732 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:37:21.005000 audit[4316]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7d71a7df a2=241 a3=1b6 items=1 ppid=4273 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.005000 audit: CWD cwd="/etc/service/enabled/confd/log" May 17 00:37:21.045739 kernel: audit: type=1307 audit(1747442241.005:301): cwd="/etc/service/enabled/confd/log" May 17 00:37:21.045855 kernel: audit: type=1302 audit(1747442241.005:301): item=0 name="/dev/fd/63" inode=24497 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:37:21.005000 audit: PATH item=0 name="/dev/fd/63" inode=24497 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:37:21.053273 kernel: audit: type=1327 audit(1747442241.005:301): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:37:21.005000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:37:20.980000 audit[4308]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffe6b707d0 a2=241 a3=1b6 items=1 ppid=4271 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.063990 kernel: audit: type=1300 audit(1747442240.980:300): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffe6b707d0 a2=241 a3=1b6 items=1 ppid=4271 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:20.980000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 17 00:37:21.068923 
kernel: audit: type=1307 audit(1747442240.980:300): cwd="/etc/service/enabled/node-status-reporter/log" May 17 00:37:20.980000 audit: PATH item=0 name="/dev/fd/63" inode=24482 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:37:20.980000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:37:21.082661 kernel: audit: type=1302 audit(1747442240.980:300): item=0 name="/dev/fd/63" inode=24482 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:37:21.082806 kernel: audit: type=1327 audit(1747442240.980:300): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:37:21.016000 audit[4322]: AVC avc: denied { write } for pid=4322 comm="tee" name="fd" dev="proc" ino=25740 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:37:21.016000 audit[4322]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff8554f7e1 a2=241 a3=1b6 items=1 ppid=4269 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.016000 audit: CWD cwd="/etc/service/enabled/cni/log" May 17 00:37:21.016000 audit: PATH item=0 name="/dev/fd/63" inode=24502 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:37:21.016000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:37:21.068000 audit[4329]: AVC avc: denied { write } for pid=4329 comm="tee" name="fd" dev="proc" ino=24518 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:37:21.068000 audit[4329]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcdd5607df a2=241 a3=1b6 items=1 ppid=4298 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.068000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 17 00:37:21.068000 audit: PATH item=0 name="/dev/fd/63" inode=25746 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:37:21.068000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:37:21.088000 audit[4331]: AVC avc: denied { write } for pid=4331 comm="tee" name="fd" dev="proc" ino=25749 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:37:21.088000 audit[4331]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff203f37df a2=241 a3=1b6 items=1 ppid=4297 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.088000 audit: CWD cwd="/etc/service/enabled/felix/log" May 17 00:37:21.088000 audit: PATH item=0 name="/dev/fd/63" inode=24515 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:37:21.088000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:37:21.109000 audit[4337]: AVC avc: denied { write } for pid=4337 comm="tee" name="fd" dev="proc" ino=25753 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:37:21.109000 audit[4337]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff6c3507e0 a2=241 a3=1b6 items=1 ppid=4295 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.109000 audit: CWD cwd="/etc/service/enabled/bird/log" May 17 00:37:21.109000 audit: PATH item=0 name="/dev/fd/63" inode=24522 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:37:21.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:37:21.128000 audit[4344]: AVC avc: denied { write } for pid=4344 comm="tee" name="fd" dev="proc" ino=24528 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:37:21.128000 audit[4344]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe0007a7cf a2=241 a3=1b6 items=1 ppid=4302 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.128000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 17 00:37:21.128000 audit: PATH item=0 name="/dev/fd/63" 
inode=24523 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:37:21.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:37:21.542726 env[1842]: time="2025-05-17T00:37:21.542663540Z" level=info msg="StopPodSandbox for \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\"" May 17 00:37:21.564000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.564000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.564000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.564000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.564000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.564000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.564000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.564000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.564000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.564000 audit: BPF prog-id=10 op=LOAD May 17 00:37:21.564000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffdc320ee0 a2=98 a3=3 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.564000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.565000 audit: BPF prog-id=10 op=UNLOAD May 17 00:37:21.570000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.570000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.570000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.570000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.570000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.570000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.570000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.570000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.570000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.570000 audit: BPF prog-id=11 op=LOAD May 17 00:37:21.570000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffdc320cd0 a2=94 a3=54428f items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.570000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.572000 audit: BPF prog-id=11 op=UNLOAD May 17 00:37:21.572000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.572000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.572000 audit[4396]: AVC avc: denied { perfmon } for 
pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.572000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.572000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.572000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.572000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.572000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.572000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.572000 audit: BPF prog-id=12 op=LOAD May 17 00:37:21.572000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffdc320d00 a2=94 a3=2 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.572000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.572000 audit: BPF prog-id=12 op=UNLOAD May 17 00:37:21.790000 
audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.790000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.790000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.790000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.790000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.790000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.790000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.790000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.790000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.790000 audit: BPF prog-id=13 op=LOAD May 17 00:37:21.790000 audit[4396]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffdc320bc0 a2=94 a3=1 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.791000 audit: BPF prog-id=13 op=UNLOAD May 17 00:37:21.791000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.791000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffdc320c90 a2=50 a3=7fffdc320d70 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.791000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdc320bd0 a2=28 a3=0 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffdc320c00 a2=28 a3=0 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffdc320b10 a2=28 a3=0 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdc320c20 a2=28 a3=0 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdc320c00 a2=28 a3=0 
items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdc320bf0 a2=28 a3=0 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdc320c20 a2=28 a3=0 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffdc320c00 a2=28 a3=0 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffdc320c20 a2=28 a3=0 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffdc320bf0 a2=28 a3=0 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdc320c60 a2=28 a3=0 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffdc320a10 a2=50 a3=1 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit: BPF prog-id=14 op=LOAD May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffdc320a10 a2=94 a3=5 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit: BPF prog-id=14 op=UNLOAD May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffdc320ac0 a2=50 a3=1 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for 
pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffdc320be0 a2=4 a3=38 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.804000 audit[4396]: AVC avc: denied { confidentiality } for pid=4396 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:37:21.804000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffdc320c30 a2=94 a3=6 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.805000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { confidentiality } for pid=4396 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:37:21.805000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffdc3203e0 a2=94 a3=88 items=0 
ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.805000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.805000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { perfmon } for pid=4396 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { bpf } for pid=4396 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.805000 audit[4396]: AVC avc: denied { confidentiality } for pid=4396 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:37:21.805000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffdc3203e0 a2=94 a3=88 items=0 ppid=4305 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.805000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit: BPF prog-id=15 op=LOAD May 17 00:37:21.848000 audit[4406]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc30051e80 a2=98 a3=1999999999999999 items=0 ppid=4305 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.848000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:37:21.848000 audit: BPF prog-id=15 op=UNLOAD May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit: BPF prog-id=16 op=LOAD May 17 00:37:21.848000 audit[4406]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc30051d60 a2=94 a3=ffff items=0 ppid=4305 pid=4406 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.848000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:37:21.848000 audit: BPF prog-id=16 op=UNLOAD May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { perfmon } for pid=4406 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit[4406]: AVC avc: denied { bpf } for pid=4406 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.848000 audit: BPF prog-id=17 op=LOAD May 17 00:37:21.848000 audit[4406]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc30051da0 a2=94 a3=7ffc30051f80 items=0 ppid=4305 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.848000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:37:21.848000 audit: BPF prog-id=17 op=UNLOAD May 17 00:37:21.933109 (udev-worker)[4214]: Network interface NamePolicy= disabled on kernel command line. 
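The `PROCTITLE` records above carry the audited process's full command line, hex-encoded with NUL bytes separating the arguments (the long hex blob decodes to a `bpftool map create /sys/fs/bpf/calico/...` invocation). A minimal decoding sketch (the helper name is ours, not part of the log):

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE field: hex-encoded argv, NUL-separated.

    Audit logs the command line as one hex string; replacing the NUL
    argument separators with spaces recovers a readable command.
    """
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

# The opening bytes of the proctitle value seen above:
print(decode_proctitle("627066746F6F6C006D61700063726561746500637265617465"[:38]))
```

Feeding the full hex value from the record yields the complete `bpftool map create` command, including the pinned map path under `/sys/fs/bpf/calico/`.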
May 17 00:37:21.942413 systemd-networkd[1517]: vxlan.calico: Link UP May 17 00:37:21.942423 systemd-networkd[1517]: vxlan.calico: Gained carrier May 17 00:37:21.993000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.993000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.993000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.993000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.993000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.993000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.993000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.993000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.993000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:21.993000 audit: BPF prog-id=18 op=LOAD May 17 00:37:21.993000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed8259ee0 a2=98 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:21.993000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:21.994000 audit: BPF prog-id=18 op=UNLOAD May 17 00:37:21.994438 (udev-worker)[4215]: Network interface NamePolicy= disabled on kernel command line. May 17 00:37:22.026000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.026000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.026000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.026000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.026000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
00:37:22.026000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.026000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.026000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.026000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.026000 audit: BPF prog-id=19 op=LOAD May 17 00:37:22.026000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed8259cf0 a2=94 a3=54428f items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.026000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.027000 audit: BPF prog-id=19 op=UNLOAD May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 
audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit: BPF prog-id=20 op=LOAD May 17 00:37:22.027000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed8259d20 a2=94 a3=2 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.027000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.027000 audit: BPF prog-id=20 op=UNLOAD May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffed8259bf0 a2=28 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.027000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffed8259c20 a2=28 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.027000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffed8259b30 a2=28 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.027000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffed8259c40 a2=28 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.027000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffed8259c20 a2=28 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.027000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffed8259c10 a2=28 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.027000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffed8259c40 a2=28 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.027000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffed8259c20 a2=28 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.027000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffed8259c40 a2=28 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.027000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.027000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.027000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffed8259c10 a2=28 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.027000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.029000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.029000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffed8259c80 a2=28 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.029000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.029000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.029000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.029000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
00:37:22.029000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.029000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.029000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.029000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.029000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.029000 audit: BPF prog-id=21 op=LOAD May 17 00:37:22.029000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffed8259af0 a2=94 a3=0 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.029000 audit: BPF prog-id=21 op=UNLOAD May 17 00:37:22.029000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.029000 
audit[4431]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffed8259ae0 a2=50 a3=2800 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.030000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.030000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffed8259ae0 a2=50 a3=2800 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.030000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.030000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.030000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.030000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 17 00:37:22.030000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.030000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.030000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.030000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.030000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.030000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.030000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.030000 audit: BPF prog-id=22 op=LOAD May 17 00:37:22.030000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffed8259300 a2=94 a3=2 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.030000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.031000 audit: BPF prog-id=22 op=UNLOAD May 17 00:37:22.031000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.031000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.031000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.031000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.031000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.031000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.031000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.031000 audit[4431]: AVC avc: denied { perfmon } for pid=4431 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
00:37:22.031000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.031000 audit[4431]: AVC avc: denied { bpf } for pid=4431 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.031000 audit: BPF prog-id=23 op=LOAD May 17 00:37:22.031000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffed8259400 a2=94 a3=30 items=0 ppid=4305 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.031000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:37:22.037000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.037000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.037000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.037000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.037000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.037000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.037000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.037000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.037000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.037000 audit: BPF prog-id=24 op=LOAD May 17 00:37:22.037000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd774db90 a2=98 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.037000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.038000 audit: BPF prog-id=24 op=UNLOAD May 17 00:37:22.038000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.038000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.038000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.038000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.038000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.038000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.038000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.038000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.038000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.038000 audit: BPF prog-id=25 op=LOAD May 17 00:37:22.038000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdd774d980 a2=94 a3=54428f items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) May 17 00:37:22.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.039000 audit: BPF prog-id=25 op=UNLOAD May 17 00:37:22.040000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.040000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.040000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.040000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.040000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.040000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.040000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.040000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 17 00:37:22.040000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.040000 audit: BPF prog-id=26 op=LOAD May 17 00:37:22.040000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdd774d9b0 a2=94 a3=2 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.040000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.040000 audit: BPF prog-id=26 op=UNLOAD May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:21.725 [INFO][4391] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:21.726 [INFO][4391] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" iface="eth0" netns="/var/run/netns/cni-d4ac8ce8-103b-26a9-4852-bf200d2ad155" May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:21.726 [INFO][4391] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" iface="eth0" netns="/var/run/netns/cni-d4ac8ce8-103b-26a9-4852-bf200d2ad155" May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:21.728 [INFO][4391] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" iface="eth0" netns="/var/run/netns/cni-d4ac8ce8-103b-26a9-4852-bf200d2ad155" May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:21.728 [INFO][4391] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:21.728 [INFO][4391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:22.126 [INFO][4400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" HandleID="k8s-pod-network.b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" Workload="ip--172--31--26--143-k8s-whisker--5669ccd8b7--dh98d-eth0" May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:22.131 [INFO][4400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:22.131 [INFO][4400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:22.157 [WARNING][4400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" HandleID="k8s-pod-network.b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" Workload="ip--172--31--26--143-k8s-whisker--5669ccd8b7--dh98d-eth0" May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:22.157 [INFO][4400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" HandleID="k8s-pod-network.b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" Workload="ip--172--31--26--143-k8s-whisker--5669ccd8b7--dh98d-eth0" May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:22.159 [INFO][4400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:22.181127 env[1842]: 2025-05-17 00:37:22.162 [INFO][4391] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:22.181127 env[1842]: time="2025-05-17T00:37:22.175099341Z" level=info msg="TearDown network for sandbox \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\" successfully" May 17 00:37:22.181127 env[1842]: time="2025-05-17T00:37:22.175154714Z" level=info msg="StopPodSandbox for \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\" returns successfully" May 17 00:37:22.178969 systemd[1]: run-netns-cni\x2dd4ac8ce8\x2d103b\x2d26a9\x2d4852\x2dbf200d2ad155.mount: Deactivated successfully. 
May 17 00:37:22.211000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.211000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.211000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.211000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.211000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.211000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.211000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.211000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.211000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.211000 audit: BPF prog-id=27 op=LOAD May 17 
00:37:22.211000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdd774d870 a2=94 a3=1 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.211000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.211000 audit: BPF prog-id=27 op=UNLOAD May 17 00:37:22.211000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.211000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffdd774d940 a2=50 a3=7ffdd774da20 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.211000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd774d880 a2=28 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd774d8b0 a2=28 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd774d7c0 a2=28 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd774d8d0 a2=28 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd774d8b0 a2=28 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd774d8a0 a2=28 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd774d8d0 a2=28 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd774d8b0 a2=28 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd774d8d0 a2=28 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd774d8a0 a2=28 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdd774d910 a2=28 a3=0 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdd774d6c0 a2=50 a3=1 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.223000 audit: BPF prog-id=28 op=LOAD May 17 00:37:22.223000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdd774d6c0 a2=94 a3=5 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.224000 audit: BPF prog-id=28 op=UNLOAD May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdd774d770 a2=50 a3=1 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.224000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffdd774d890 a2=4 a3=38 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.224000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { confidentiality } for pid=4439 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:37:22.224000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdd774d8e0 a2=94 a3=6 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.224000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { confidentiality } for pid=4439 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:37:22.224000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdd774d090 a2=94 a3=88 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.224000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.224000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdd774d090 a2=94 a3=88 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.224000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.225000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.225000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdd774eac0 a2=10 a3=f8f00800 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 
00:37:22.225000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.225000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.225000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdd774e960 a2=10 a3=3 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.225000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.225000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.225000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdd774e900 a2=10 a3=3 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.225000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.225000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:37:22.225000 
audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdd774e900 a2=10 a3=7 items=0 ppid=4305 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.225000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:37:22.232000 audit: BPF prog-id=23 op=UNLOAD May 17 00:37:22.348000 audit[4465]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=4465 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:22.348000 audit[4465]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffbdcc1b90 a2=0 a3=7fffbdcc1b7c items=0 ppid=4305 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.348000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:22.355483 kubelet[2785]: I0517 00:37:22.355254 2785 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjggb\" (UniqueName: \"kubernetes.io/projected/0739d8f4-ba25-4a9e-a618-fc129948c71f-kube-api-access-kjggb\") pod \"0739d8f4-ba25-4a9e-a618-fc129948c71f\" (UID: \"0739d8f4-ba25-4a9e-a618-fc129948c71f\") " May 17 00:37:22.357586 kubelet[2785]: I0517 00:37:22.357553 2785 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0739d8f4-ba25-4a9e-a618-fc129948c71f-whisker-ca-bundle\") pod \"0739d8f4-ba25-4a9e-a618-fc129948c71f\" 
(UID: \"0739d8f4-ba25-4a9e-a618-fc129948c71f\") " May 17 00:37:22.357817 kubelet[2785]: I0517 00:37:22.357799 2785 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0739d8f4-ba25-4a9e-a618-fc129948c71f-whisker-backend-key-pair\") pod \"0739d8f4-ba25-4a9e-a618-fc129948c71f\" (UID: \"0739d8f4-ba25-4a9e-a618-fc129948c71f\") " May 17 00:37:22.356000 audit[4464]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:22.356000 audit[4464]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc414034a0 a2=0 a3=7ffc4140348c items=0 ppid=4305 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.356000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:22.371000 audit[4469]: NETFILTER_CFG table=filter:103 family=2 entries=39 op=nft_register_chain pid=4469 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:22.371000 audit[4469]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffe0f706970 a2=0 a3=7ffe0f70695c items=0 ppid=4305 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.371000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:22.379826 systemd[1]: 
var-lib-kubelet-pods-0739d8f4\x2dba25\x2d4a9e\x2da618\x2dfc129948c71f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkjggb.mount: Deactivated successfully. May 17 00:37:22.384000 audit[4463]: NETFILTER_CFG table=raw:104 family=2 entries=21 op=nft_register_chain pid=4463 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:22.387407 kubelet[2785]: I0517 00:37:22.387369 2785 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0739d8f4-ba25-4a9e-a618-fc129948c71f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0739d8f4-ba25-4a9e-a618-fc129948c71f" (UID: "0739d8f4-ba25-4a9e-a618-fc129948c71f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:37:22.387659 kubelet[2785]: I0517 00:37:22.383192 2785 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0739d8f4-ba25-4a9e-a618-fc129948c71f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0739d8f4-ba25-4a9e-a618-fc129948c71f" (UID: "0739d8f4-ba25-4a9e-a618-fc129948c71f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:37:22.391812 systemd[1]: var-lib-kubelet-pods-0739d8f4\x2dba25\x2d4a9e\x2da618\x2dfc129948c71f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 17 00:37:22.384000 audit[4463]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe2c98a510 a2=0 a3=7ffe2c98a4fc items=0 ppid=4305 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:22.384000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:22.394952 kubelet[2785]: I0517 00:37:22.394913 2785 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0739d8f4-ba25-4a9e-a618-fc129948c71f-kube-api-access-kjggb" (OuterVolumeSpecName: "kube-api-access-kjggb") pod "0739d8f4-ba25-4a9e-a618-fc129948c71f" (UID: "0739d8f4-ba25-4a9e-a618-fc129948c71f"). InnerVolumeSpecName "kube-api-access-kjggb". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:37:22.458542 kubelet[2785]: I0517 00:37:22.458405 2785 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjggb\" (UniqueName: \"kubernetes.io/projected/0739d8f4-ba25-4a9e-a618-fc129948c71f-kube-api-access-kjggb\") on node \"ip-172-31-26-143\" DevicePath \"\"" May 17 00:37:22.458542 kubelet[2785]: I0517 00:37:22.458446 2785 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0739d8f4-ba25-4a9e-a618-fc129948c71f-whisker-ca-bundle\") on node \"ip-172-31-26-143\" DevicePath \"\"" May 17 00:37:22.458542 kubelet[2785]: I0517 00:37:22.458456 2785 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0739d8f4-ba25-4a9e-a618-fc129948c71f-whisker-backend-key-pair\") on node \"ip-172-31-26-143\" DevicePath \"\"" May 17 00:37:22.860858 kubelet[2785]: I0517 00:37:22.860793 2785 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/24aa9389-753f-4495-bc1f-dde18df8a4e1-whisker-backend-key-pair\") pod \"whisker-6686775679-47m7l\" (UID: \"24aa9389-753f-4495-bc1f-dde18df8a4e1\") " pod="calico-system/whisker-6686775679-47m7l" May 17 00:37:22.860858 kubelet[2785]: I0517 00:37:22.860866 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24aa9389-753f-4495-bc1f-dde18df8a4e1-whisker-ca-bundle\") pod \"whisker-6686775679-47m7l\" (UID: \"24aa9389-753f-4495-bc1f-dde18df8a4e1\") " pod="calico-system/whisker-6686775679-47m7l" May 17 00:37:22.861085 kubelet[2785]: I0517 00:37:22.860886 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbhwk\" (UniqueName: \"kubernetes.io/projected/24aa9389-753f-4495-bc1f-dde18df8a4e1-kube-api-access-wbhwk\") pod \"whisker-6686775679-47m7l\" (UID: \"24aa9389-753f-4495-bc1f-dde18df8a4e1\") " pod="calico-system/whisker-6686775679-47m7l" May 17 00:37:23.073220 env[1842]: time="2025-05-17T00:37:23.073075336Z" level=info msg="StopPodSandbox for \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\"" May 17 00:37:23.086755 kubelet[2785]: I0517 00:37:23.086699 2785 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0739d8f4-ba25-4a9e-a618-fc129948c71f" path="/var/lib/kubelet/pods/0739d8f4-ba25-4a9e-a618-fc129948c71f/volumes" May 17 00:37:23.102417 env[1842]: time="2025-05-17T00:37:23.102355355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6686775679-47m7l,Uid:24aa9389-753f-4495-bc1f-dde18df8a4e1,Namespace:calico-system,Attempt:0,}" May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.147 [INFO][4488] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" 
May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.147 [INFO][4488] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" iface="eth0" netns="/var/run/netns/cni-e5471a99-0be3-4cc4-ef48-a10bdb205d04" May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.147 [INFO][4488] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" iface="eth0" netns="/var/run/netns/cni-e5471a99-0be3-4cc4-ef48-a10bdb205d04" May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.170 [INFO][4488] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" iface="eth0" netns="/var/run/netns/cni-e5471a99-0be3-4cc4-ef48-a10bdb205d04" May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.171 [INFO][4488] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.171 [INFO][4488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.276 [INFO][4507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" HandleID="k8s-pod-network.6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.277 [INFO][4507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.277 [INFO][4507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.291 [WARNING][4507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" HandleID="k8s-pod-network.6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.291 [INFO][4507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" HandleID="k8s-pod-network.6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.293 [INFO][4507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:23.302805 env[1842]: 2025-05-17 00:37:23.297 [INFO][4488] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:23.306960 systemd[1]: run-netns-cni\x2de5471a99\x2d0be3\x2d4cc4\x2def48\x2da10bdb205d04.mount: Deactivated successfully. 
May 17 00:37:23.309172 env[1842]: time="2025-05-17T00:37:23.309109969Z" level=info msg="TearDown network for sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\" successfully" May 17 00:37:23.309172 env[1842]: time="2025-05-17T00:37:23.309166147Z" level=info msg="StopPodSandbox for \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\" returns successfully" May 17 00:37:23.310091 env[1842]: time="2025-05-17T00:37:23.310067675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q95gq,Uid:8ad07685-9392-4209-bbe3-44ad549c6102,Namespace:kube-system,Attempt:1,}" May 17 00:37:23.411162 systemd-networkd[1517]: vxlan.calico: Gained IPv6LL May 17 00:37:23.435561 (udev-worker)[4435]: Network interface NamePolicy= disabled on kernel command line. May 17 00:37:23.441203 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:37:23.441324 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliae2076718f7: link becomes ready May 17 00:37:23.441514 systemd-networkd[1517]: caliae2076718f7: Link UP May 17 00:37:23.441813 systemd-networkd[1517]: caliae2076718f7: Gained carrier May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.295 [INFO][4496] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0 whisker-6686775679- calico-system 24aa9389-753f-4495-bc1f-dde18df8a4e1 926 0 2025-05-17 00:37:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6686775679 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-143 whisker-6686775679-47m7l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliae2076718f7 [] [] }} ContainerID="1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" Namespace="calico-system" Pod="whisker-6686775679-47m7l" 
WorkloadEndpoint="ip--172--31--26--143-k8s-whisker--6686775679--47m7l-" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.295 [INFO][4496] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" Namespace="calico-system" Pod="whisker-6686775679-47m7l" WorkloadEndpoint="ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.354 [INFO][4516] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" HandleID="k8s-pod-network.1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" Workload="ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.354 [INFO][4516] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" HandleID="k8s-pod-network.1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" Workload="ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d1630), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-143", "pod":"whisker-6686775679-47m7l", "timestamp":"2025-05-17 00:37:23.354315726 +0000 UTC"}, Hostname:"ip-172-31-26-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.354 [INFO][4516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.354 [INFO][4516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.354 [INFO][4516] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-143' May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.368 [INFO][4516] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" host="ip-172-31-26-143" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.384 [INFO][4516] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-143" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.390 [INFO][4516] ipam/ipam.go 511: Trying affinity for 192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.396 [INFO][4516] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.399 [INFO][4516] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.399 [INFO][4516] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" host="ip-172-31-26-143" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.402 [INFO][4516] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067 May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.410 [INFO][4516] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" host="ip-172-31-26-143" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.426 [INFO][4516] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.129/26] block=192.168.123.128/26 
handle="k8s-pod-network.1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" host="ip-172-31-26-143" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.426 [INFO][4516] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.129/26] handle="k8s-pod-network.1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" host="ip-172-31-26-143" May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.426 [INFO][4516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:23.469176 env[1842]: 2025-05-17 00:37:23.426 [INFO][4516] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.129/26] IPv6=[] ContainerID="1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" HandleID="k8s-pod-network.1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" Workload="ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0" May 17 00:37:23.471224 env[1842]: 2025-05-17 00:37:23.431 [INFO][4496] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" Namespace="calico-system" Pod="whisker-6686775679-47m7l" WorkloadEndpoint="ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0", GenerateName:"whisker-6686775679-", Namespace:"calico-system", SelfLink:"", UID:"24aa9389-753f-4495-bc1f-dde18df8a4e1", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6686775679", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"", Pod:"whisker-6686775679-47m7l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliae2076718f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:23.471224 env[1842]: 2025-05-17 00:37:23.431 [INFO][4496] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.129/32] ContainerID="1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" Namespace="calico-system" Pod="whisker-6686775679-47m7l" WorkloadEndpoint="ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0" May 17 00:37:23.471224 env[1842]: 2025-05-17 00:37:23.431 [INFO][4496] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae2076718f7 ContainerID="1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" Namespace="calico-system" Pod="whisker-6686775679-47m7l" WorkloadEndpoint="ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0" May 17 00:37:23.471224 env[1842]: 2025-05-17 00:37:23.444 [INFO][4496] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" Namespace="calico-system" Pod="whisker-6686775679-47m7l" WorkloadEndpoint="ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0" May 17 00:37:23.471224 env[1842]: 2025-05-17 00:37:23.444 [INFO][4496] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" Namespace="calico-system" 
Pod="whisker-6686775679-47m7l" WorkloadEndpoint="ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0", GenerateName:"whisker-6686775679-", Namespace:"calico-system", SelfLink:"", UID:"24aa9389-753f-4495-bc1f-dde18df8a4e1", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6686775679", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067", Pod:"whisker-6686775679-47m7l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliae2076718f7", MAC:"52:eb:8f:16:f6:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:23.471224 env[1842]: 2025-05-17 00:37:23.466 [INFO][4496] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067" Namespace="calico-system" Pod="whisker-6686775679-47m7l" WorkloadEndpoint="ip--172--31--26--143-k8s-whisker--6686775679--47m7l-eth0" May 17 00:37:23.529227 env[1842]: 
time="2025-05-17T00:37:23.529118631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:23.529592 env[1842]: time="2025-05-17T00:37:23.529530901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:23.530727 env[1842]: time="2025-05-17T00:37:23.530676369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:23.535242 env[1842]: time="2025-05-17T00:37:23.535167503Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067 pid=4558 runtime=io.containerd.runc.v2 May 17 00:37:23.529000 audit[4559]: NETFILTER_CFG table=filter:105 family=2 entries=59 op=nft_register_chain pid=4559 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:23.529000 audit[4559]: SYSCALL arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7ffe49e14b80 a2=0 a3=7ffe49e14b6c items=0 ppid=4305 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:23.529000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:23.601376 systemd-networkd[1517]: calic1a1509cc20: Link UP May 17 00:37:23.603869 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic1a1509cc20: link becomes ready May 17 00:37:23.604112 systemd-networkd[1517]: calic1a1509cc20: Gained carrier May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.443 [INFO][4523] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0 coredns-7c65d6cfc9- kube-system 8ad07685-9392-4209-bbe3-44ad549c6102 929 0 2025-05-17 00:36:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-143 coredns-7c65d6cfc9-q95gq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic1a1509cc20 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95gq" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.446 [INFO][4523] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95gq" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.531 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" HandleID="k8s-pod-network.9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.532 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" HandleID="k8s-pod-network.9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-143", 
"pod":"coredns-7c65d6cfc9-q95gq", "timestamp":"2025-05-17 00:37:23.531760669 +0000 UTC"}, Hostname:"ip-172-31-26-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.532 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.532 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.532 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-143' May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.549 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" host="ip-172-31-26-143" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.555 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-143" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.562 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.564 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.570 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.570 [INFO][4537] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" host="ip-172-31-26-143" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.572 [INFO][4537] ipam/ipam.go 1764: Creating new 
handle: k8s-pod-network.9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.581 [INFO][4537] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" host="ip-172-31-26-143" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.593 [INFO][4537] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.130/26] block=192.168.123.128/26 handle="k8s-pod-network.9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" host="ip-172-31-26-143" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.593 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.130/26] handle="k8s-pod-network.9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" host="ip-172-31-26-143" May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.593 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:37:23.636786 env[1842]: 2025-05-17 00:37:23.593 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.130/26] IPv6=[] ContainerID="9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" HandleID="k8s-pod-network.9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:23.642699 env[1842]: 2025-05-17 00:37:23.597 [INFO][4523] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95gq" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8ad07685-9392-4209-bbe3-44ad549c6102", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"", Pod:"coredns-7c65d6cfc9-q95gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1a1509cc20", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:23.642699 env[1842]: 2025-05-17 00:37:23.598 [INFO][4523] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.130/32] ContainerID="9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95gq" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:23.642699 env[1842]: 2025-05-17 00:37:23.598 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1a1509cc20 ContainerID="9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95gq" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:23.642699 env[1842]: 2025-05-17 00:37:23.613 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95gq" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:23.642699 env[1842]: 2025-05-17 00:37:23.613 [INFO][4523] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95gq" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8ad07685-9392-4209-bbe3-44ad549c6102", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa", Pod:"coredns-7c65d6cfc9-q95gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1a1509cc20", MAC:"8a:a5:cc:eb:f5:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:23.642699 env[1842]: 2025-05-17 00:37:23.630 [INFO][4523] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95gq" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:23.679262 env[1842]: time="2025-05-17T00:37:23.679161938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:23.679575 env[1842]: time="2025-05-17T00:37:23.679521989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:23.679757 env[1842]: time="2025-05-17T00:37:23.679729509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:23.684081 env[1842]: time="2025-05-17T00:37:23.684001375Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa pid=4607 runtime=io.containerd.runc.v2 May 17 00:37:23.691522 env[1842]: time="2025-05-17T00:37:23.691311912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6686775679-47m7l,Uid:24aa9389-753f-4495-bc1f-dde18df8a4e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"1779d3a3d7bb87e04a1e52907efecdda5a0c2545e05977ec32e5b42c7d53d067\"" May 17 00:37:23.696365 env[1842]: time="2025-05-17T00:37:23.696312334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:37:23.699000 audit[4625]: NETFILTER_CFG table=filter:106 family=2 entries=42 op=nft_register_chain pid=4625 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:23.699000 audit[4625]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffd73f70eb0 a2=0 a3=7ffd73f70e9c items=0 ppid=4305 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:23.699000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:23.763266 env[1842]: time="2025-05-17T00:37:23.763210191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q95gq,Uid:8ad07685-9392-4209-bbe3-44ad549c6102,Namespace:kube-system,Attempt:1,} returns sandbox id \"9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa\"" May 17 00:37:23.767249 env[1842]: time="2025-05-17T00:37:23.767200904Z" level=info msg="CreateContainer within sandbox \"9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:37:23.801140 env[1842]: time="2025-05-17T00:37:23.801064394Z" level=info msg="CreateContainer within sandbox \"9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eec74337ada48309497e2eb5c137549536d35a6115312610af023b7c097a188e\"" May 17 00:37:23.801752 env[1842]: time="2025-05-17T00:37:23.801627519Z" level=info msg="StartContainer for \"eec74337ada48309497e2eb5c137549536d35a6115312610af023b7c097a188e\"" May 17 00:37:23.873176 env[1842]: time="2025-05-17T00:37:23.873060822Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:37:23.876799 env[1842]: time="2025-05-17T00:37:23.876740876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 
00:37:23.877061 kubelet[2785]: E0517 00:37:23.876996 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:37:23.877374 kubelet[2785]: E0517 00:37:23.877059 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:37:23.889909 kubelet[2785]: E0517 00:37:23.889803 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fecda21b58645b7beb0200da9036e4d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbhwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*tr
ue,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6686775679-47m7l_calico-system(24aa9389-753f-4495-bc1f-dde18df8a4e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:37:23.894295 env[1842]: time="2025-05-17T00:37:23.894250809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:37:23.897651 env[1842]: time="2025-05-17T00:37:23.897596828Z" level=info msg="StartContainer for \"eec74337ada48309497e2eb5c137549536d35a6115312610af023b7c097a188e\" returns successfully" May 17 00:37:24.064783 env[1842]: time="2025-05-17T00:37:24.064691286Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:37:24.066989 env[1842]: time="2025-05-17T00:37:24.066927277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:37:24.067282 kubelet[2785]: E0517 00:37:24.067244 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:37:24.067363 kubelet[2785]: E0517 00:37:24.067297 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:37:24.067509 kubelet[2785]: E0517 00:37:24.067419 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbhwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPol
icy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6686775679-47m7l_calico-system(24aa9389-753f-4495-bc1f-dde18df8a4e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:37:24.069618 kubelet[2785]: E0517 00:37:24.068716 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:37:24.072799 env[1842]: time="2025-05-17T00:37:24.072752833Z" level=info msg="StopPodSandbox for 
\"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\"" May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.129 [INFO][4691] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.130 [INFO][4691] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" iface="eth0" netns="/var/run/netns/cni-d8784266-59f1-8806-0762-49e0d71b841e" May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.130 [INFO][4691] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" iface="eth0" netns="/var/run/netns/cni-d8784266-59f1-8806-0762-49e0d71b841e" May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.130 [INFO][4691] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" iface="eth0" netns="/var/run/netns/cni-d8784266-59f1-8806-0762-49e0d71b841e" May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.130 [INFO][4691] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.130 [INFO][4691] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.160 [INFO][4698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" HandleID="k8s-pod-network.392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.161 [INFO][4698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.161 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.167 [WARNING][4698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" HandleID="k8s-pod-network.392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.167 [INFO][4698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" HandleID="k8s-pod-network.392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.169 [INFO][4698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:24.174015 env[1842]: 2025-05-17 00:37:24.171 [INFO][4691] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:24.183491 env[1842]: time="2025-05-17T00:37:24.175533862Z" level=info msg="TearDown network for sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\" successfully" May 17 00:37:24.183491 env[1842]: time="2025-05-17T00:37:24.175584677Z" level=info msg="StopPodSandbox for \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\" returns successfully" May 17 00:37:24.183491 env[1842]: time="2025-05-17T00:37:24.176599213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-m25cz,Uid:d4a0e73a-3af9-4fe3-8741-031055e915ab,Namespace:calico-system,Attempt:1,}" May 17 00:37:24.181573 systemd[1]: run-netns-cni\x2dd8784266\x2d59f1\x2d8806\x2d0762\x2d49e0d71b841e.mount: Deactivated successfully. 
May 17 00:37:24.333002 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibe451daeac4: link becomes ready May 17 00:37:24.332699 systemd-networkd[1517]: calibe451daeac4: Link UP May 17 00:37:24.333401 systemd-networkd[1517]: calibe451daeac4: Gained carrier May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.251 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0 goldmane-8f77d7b6c- calico-system d4a0e73a-3af9-4fe3-8741-031055e915ab 949 0 2025-05-17 00:36:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-143 goldmane-8f77d7b6c-m25cz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibe451daeac4 [] [] }} ContainerID="3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" Namespace="calico-system" Pod="goldmane-8f77d7b6c-m25cz" WorkloadEndpoint="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.251 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" Namespace="calico-system" Pod="goldmane-8f77d7b6c-m25cz" WorkloadEndpoint="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.281 [INFO][4716] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" HandleID="k8s-pod-network.3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.281 [INFO][4716] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" HandleID="k8s-pod-network.3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9070), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-143", "pod":"goldmane-8f77d7b6c-m25cz", "timestamp":"2025-05-17 00:37:24.281113098 +0000 UTC"}, Hostname:"ip-172-31-26-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.281 [INFO][4716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.281 [INFO][4716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.281 [INFO][4716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-143' May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.291 [INFO][4716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" host="ip-172-31-26-143" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.297 [INFO][4716] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-143" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.303 [INFO][4716] ipam/ipam.go 511: Trying affinity for 192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.305 [INFO][4716] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.308 [INFO][4716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 
host="ip-172-31-26-143" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.308 [INFO][4716] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" host="ip-172-31-26-143" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.310 [INFO][4716] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709 May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.316 [INFO][4716] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" host="ip-172-31-26-143" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.325 [INFO][4716] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.131/26] block=192.168.123.128/26 handle="k8s-pod-network.3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" host="ip-172-31-26-143" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.325 [INFO][4716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.131/26] handle="k8s-pod-network.3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" host="ip-172-31-26-143" May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.325 [INFO][4716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:37:24.349565 env[1842]: 2025-05-17 00:37:24.325 [INFO][4716] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.131/26] IPv6=[] ContainerID="3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" HandleID="k8s-pod-network.3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:24.350463 env[1842]: 2025-05-17 00:37:24.328 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" Namespace="calico-system" Pod="goldmane-8f77d7b6c-m25cz" WorkloadEndpoint="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"d4a0e73a-3af9-4fe3-8741-031055e915ab", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"", Pod:"goldmane-8f77d7b6c-m25cz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calibe451daeac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:24.350463 env[1842]: 2025-05-17 00:37:24.328 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.131/32] ContainerID="3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" Namespace="calico-system" Pod="goldmane-8f77d7b6c-m25cz" WorkloadEndpoint="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:24.350463 env[1842]: 2025-05-17 00:37:24.328 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe451daeac4 ContainerID="3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" Namespace="calico-system" Pod="goldmane-8f77d7b6c-m25cz" WorkloadEndpoint="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:24.350463 env[1842]: 2025-05-17 00:37:24.333 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" Namespace="calico-system" Pod="goldmane-8f77d7b6c-m25cz" WorkloadEndpoint="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:24.350463 env[1842]: 2025-05-17 00:37:24.333 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" Namespace="calico-system" Pod="goldmane-8f77d7b6c-m25cz" WorkloadEndpoint="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"d4a0e73a-3af9-4fe3-8741-031055e915ab", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 
36, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709", Pod:"goldmane-8f77d7b6c-m25cz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibe451daeac4", MAC:"ba:40:b0:31:db:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:24.350463 env[1842]: 2025-05-17 00:37:24.347 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709" Namespace="calico-system" Pod="goldmane-8f77d7b6c-m25cz" WorkloadEndpoint="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:24.364245 env[1842]: time="2025-05-17T00:37:24.363827288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:24.364245 env[1842]: time="2025-05-17T00:37:24.363930991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:24.364245 env[1842]: time="2025-05-17T00:37:24.363980018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:24.364245 env[1842]: time="2025-05-17T00:37:24.364145458Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709 pid=4742 runtime=io.containerd.runc.v2 May 17 00:37:24.369000 audit[4746]: NETFILTER_CFG table=filter:107 family=2 entries=48 op=nft_register_chain pid=4746 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:24.369000 audit[4746]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7ffd2ff99bc0 a2=0 a3=7ffd2ff99bac items=0 ppid=4305 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:24.369000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:24.407737 kubelet[2785]: E0517 00:37:24.407690 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:37:24.494766 env[1842]: time="2025-05-17T00:37:24.494703540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-m25cz,Uid:d4a0e73a-3af9-4fe3-8741-031055e915ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709\"" May 17 00:37:24.494000 audit[4779]: NETFILTER_CFG table=filter:108 family=2 
entries=20 op=nft_register_rule pid=4779 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:24.494000 audit[4779]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe023cbe70 a2=0 a3=7ffe023cbe5c items=0 ppid=2892 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:24.494000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:24.498000 audit[4779]: NETFILTER_CFG table=nat:109 family=2 entries=14 op=nft_register_rule pid=4779 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:24.502091 env[1842]: time="2025-05-17T00:37:24.497402960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:37:24.499885 systemd-networkd[1517]: caliae2076718f7: Gained IPv6LL May 17 00:37:24.498000 audit[4779]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe023cbe70 a2=0 a3=0 items=0 ppid=2892 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:24.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:24.510000 audit[4781]: NETFILTER_CFG table=filter:110 family=2 entries=20 op=nft_register_rule pid=4781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:24.510000 audit[4781]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff1e33c580 a2=0 a3=7fff1e33c56c items=0 ppid=2892 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:24.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:24.516000 audit[4781]: NETFILTER_CFG table=nat:111 family=2 entries=14 op=nft_register_rule pid=4781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:24.516000 audit[4781]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff1e33c580 a2=0 a3=0 items=0 ppid=2892 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:24.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:24.678401 env[1842]: time="2025-05-17T00:37:24.678330656Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:37:24.680984 env[1842]: time="2025-05-17T00:37:24.680681104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:37:24.681162 kubelet[2785]: E0517 00:37:24.680938 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:37:24.681162 kubelet[2785]: 
E0517 00:37:24.681100 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:37:24.681320 kubelet[2785]: E0517 00:37:24.681266 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwwxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Comma
nd:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-m25cz_calico-system(d4a0e73a-3af9-4fe3-8741-031055e915ab): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:37:24.682741 kubelet[2785]: E0517 00:37:24.682666 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 
17 00:37:24.690963 systemd-networkd[1517]: calic1a1509cc20: Gained IPv6LL May 17 00:37:25.072358 env[1842]: time="2025-05-17T00:37:25.072317938Z" level=info msg="StopPodSandbox for \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\"" May 17 00:37:25.143882 kubelet[2785]: I0517 00:37:25.143626 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q95gq" podStartSLOduration=43.143605025 podStartE2EDuration="43.143605025s" podCreationTimestamp="2025-05-17 00:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:37:24.452894303 +0000 UTC m=+45.676283048" watchObservedRunningTime="2025-05-17 00:37:25.143605025 +0000 UTC m=+46.366993770" May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.145 [INFO][4794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.145 [INFO][4794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" iface="eth0" netns="/var/run/netns/cni-79688c6a-7f67-2709-c3d4-a9ebc376f52a" May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.145 [INFO][4794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" iface="eth0" netns="/var/run/netns/cni-79688c6a-7f67-2709-c3d4-a9ebc376f52a" May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.149 [INFO][4794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" iface="eth0" netns="/var/run/netns/cni-79688c6a-7f67-2709-c3d4-a9ebc376f52a" May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.149 [INFO][4794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.149 [INFO][4794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.210 [INFO][4802] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" HandleID="k8s-pod-network.0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.211 [INFO][4802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.211 [INFO][4802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.220 [WARNING][4802] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" HandleID="k8s-pod-network.0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.220 [INFO][4802] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" HandleID="k8s-pod-network.0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.222 [INFO][4802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:25.230173 env[1842]: 2025-05-17 00:37:25.225 [INFO][4794] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:25.239256 env[1842]: time="2025-05-17T00:37:25.233945286Z" level=info msg="TearDown network for sandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\" successfully" May 17 00:37:25.239256 env[1842]: time="2025-05-17T00:37:25.234003250Z" level=info msg="StopPodSandbox for \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\" returns successfully" May 17 00:37:25.234550 systemd[1]: run-netns-cni\x2d79688c6a\x2d7f67\x2d2709\x2dc3d4\x2da9ebc376f52a.mount: Deactivated successfully. 
May 17 00:37:25.240368 env[1842]: time="2025-05-17T00:37:25.240298860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z72kr,Uid:7d455070-45c6-475a-be66-925b4a2071bc,Namespace:calico-system,Attempt:1,}" May 17 00:37:25.424631 systemd-networkd[1517]: calic1551bc05b6: Link UP May 17 00:37:25.427706 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:37:25.427952 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic1551bc05b6: link becomes ready May 17 00:37:25.428488 systemd-networkd[1517]: calic1551bc05b6: Gained carrier May 17 00:37:25.436206 kubelet[2785]: E0517 00:37:25.436157 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:37:25.440089 kubelet[2785]: E0517 00:37:25.440044 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.340 [INFO][4808] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0 csi-node-driver- calico-system 7d455070-45c6-475a-be66-925b4a2071bc 968 0 2025-05-17 00:36:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-143 csi-node-driver-z72kr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic1551bc05b6 [] [] }} ContainerID="2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" Namespace="calico-system" Pod="csi-node-driver-z72kr" WorkloadEndpoint="ip--172--31--26--143-k8s-csi--node--driver--z72kr-" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.340 [INFO][4808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" Namespace="calico-system" Pod="csi-node-driver-z72kr" WorkloadEndpoint="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.371 [INFO][4820] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" HandleID="k8s-pod-network.2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.371 [INFO][4820] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" HandleID="k8s-pod-network.2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d1020), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-143", "pod":"csi-node-driver-z72kr", "timestamp":"2025-05-17 00:37:25.371493929 +0000 UTC"}, Hostname:"ip-172-31-26-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 
17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.371 [INFO][4820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.371 [INFO][4820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.372 [INFO][4820] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-143' May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.379 [INFO][4820] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" host="ip-172-31-26-143" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.386 [INFO][4820] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-143" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.392 [INFO][4820] ipam/ipam.go 511: Trying affinity for 192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.394 [INFO][4820] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.397 [INFO][4820] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.397 [INFO][4820] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" host="ip-172-31-26-143" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.402 [INFO][4820] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09 May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.408 [INFO][4820] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.128/26 
handle="k8s-pod-network.2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" host="ip-172-31-26-143" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.418 [INFO][4820] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.132/26] block=192.168.123.128/26 handle="k8s-pod-network.2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" host="ip-172-31-26-143" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.418 [INFO][4820] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.132/26] handle="k8s-pod-network.2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" host="ip-172-31-26-143" May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.418 [INFO][4820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:25.469530 env[1842]: 2025-05-17 00:37:25.418 [INFO][4820] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.132/26] IPv6=[] ContainerID="2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" HandleID="k8s-pod-network.2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:25.470206 env[1842]: 2025-05-17 00:37:25.421 [INFO][4808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" Namespace="calico-system" Pod="csi-node-driver-z72kr" WorkloadEndpoint="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d455070-45c6-475a-be66-925b4a2071bc", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"", Pod:"csi-node-driver-z72kr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1551bc05b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:25.470206 env[1842]: 2025-05-17 00:37:25.421 [INFO][4808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.132/32] ContainerID="2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" Namespace="calico-system" Pod="csi-node-driver-z72kr" WorkloadEndpoint="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:25.470206 env[1842]: 2025-05-17 00:37:25.421 [INFO][4808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1551bc05b6 ContainerID="2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" Namespace="calico-system" Pod="csi-node-driver-z72kr" WorkloadEndpoint="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:25.470206 env[1842]: 2025-05-17 00:37:25.429 [INFO][4808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" Namespace="calico-system" Pod="csi-node-driver-z72kr" 
WorkloadEndpoint="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:25.470206 env[1842]: 2025-05-17 00:37:25.440 [INFO][4808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" Namespace="calico-system" Pod="csi-node-driver-z72kr" WorkloadEndpoint="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d455070-45c6-475a-be66-925b4a2071bc", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09", Pod:"csi-node-driver-z72kr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1551bc05b6", MAC:"a2:0f:04:20:68:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:25.470206 env[1842]: 2025-05-17 00:37:25.466 [INFO][4808] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09" Namespace="calico-system" Pod="csi-node-driver-z72kr" WorkloadEndpoint="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:25.510825 env[1842]: time="2025-05-17T00:37:25.510739114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:25.511018 env[1842]: time="2025-05-17T00:37:25.510869433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:25.511018 env[1842]: time="2025-05-17T00:37:25.510923298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:25.511183 env[1842]: time="2025-05-17T00:37:25.511133661Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09 pid=4844 runtime=io.containerd.runc.v2 May 17 00:37:25.524000 audit[4854]: NETFILTER_CFG table=filter:112 family=2 entries=20 op=nft_register_rule pid=4854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:25.524000 audit[4854]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf4e9bbc0 a2=0 a3=7ffdf4e9bbac items=0 ppid=2892 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:25.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:25.531000 audit[4854]: 
NETFILTER_CFG table=nat:113 family=2 entries=14 op=nft_register_rule pid=4854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:25.531000 audit[4854]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf4e9bbc0 a2=0 a3=0 items=0 ppid=2892 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:25.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:25.543000 audit[4855]: NETFILTER_CFG table=filter:114 family=2 entries=50 op=nft_register_chain pid=4855 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:25.543000 audit[4855]: SYSCALL arch=c000003e syscall=46 success=yes exit=24804 a0=3 a1=7fff11e27150 a2=0 a3=7fff11e2713c items=0 ppid=4305 pid=4855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:25.543000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:25.586551 env[1842]: time="2025-05-17T00:37:25.585988009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z72kr,Uid:7d455070-45c6-475a-be66-925b4a2071bc,Namespace:calico-system,Attempt:1,} returns sandbox id \"2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09\"" May 17 00:37:25.591064 env[1842]: time="2025-05-17T00:37:25.591018951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:37:26.072021 env[1842]: time="2025-05-17T00:37:26.071925230Z" level=info msg="StopPodSandbox for 
\"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\"" May 17 00:37:26.072353 env[1842]: time="2025-05-17T00:37:26.072012558Z" level=info msg="StopPodSandbox for \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\"" May 17 00:37:26.100485 systemd-networkd[1517]: calibe451daeac4: Gained IPv6LL May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.168 [INFO][4900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.168 [INFO][4900] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" iface="eth0" netns="/var/run/netns/cni-ef01465b-994b-4cb1-7682-d6f09155768c" May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.168 [INFO][4900] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" iface="eth0" netns="/var/run/netns/cni-ef01465b-994b-4cb1-7682-d6f09155768c" May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.169 [INFO][4900] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" iface="eth0" netns="/var/run/netns/cni-ef01465b-994b-4cb1-7682-d6f09155768c" May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.169 [INFO][4900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.169 [INFO][4900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.296 [INFO][4912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" HandleID="k8s-pod-network.dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.296 [INFO][4912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.297 [INFO][4912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.305 [WARNING][4912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" HandleID="k8s-pod-network.dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.305 [INFO][4912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" HandleID="k8s-pod-network.dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.307 [INFO][4912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:26.312316 env[1842]: 2025-05-17 00:37:26.310 [INFO][4900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:26.318296 systemd[1]: run-netns-cni\x2def01465b\x2d994b\x2d4cb1\x2d7682\x2dd6f09155768c.mount: Deactivated successfully. 
May 17 00:37:26.320554 env[1842]: time="2025-05-17T00:37:26.320505646Z" level=info msg="TearDown network for sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\" successfully" May 17 00:37:26.320687 env[1842]: time="2025-05-17T00:37:26.320666449Z" level=info msg="StopPodSandbox for \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\" returns successfully" May 17 00:37:26.321591 env[1842]: time="2025-05-17T00:37:26.321561279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7487d56f97-mnmbq,Uid:10bbdf63-9286-463c-aba2-5278ed8400da,Namespace:calico-system,Attempt:1,}" May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.183 [INFO][4899] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.183 [INFO][4899] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" iface="eth0" netns="/var/run/netns/cni-9acad7b4-3871-1552-6d86-f10174f090b5" May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.183 [INFO][4899] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" iface="eth0" netns="/var/run/netns/cni-9acad7b4-3871-1552-6d86-f10174f090b5" May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.183 [INFO][4899] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" iface="eth0" netns="/var/run/netns/cni-9acad7b4-3871-1552-6d86-f10174f090b5" May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.183 [INFO][4899] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.183 [INFO][4899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.306 [INFO][4917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" HandleID="k8s-pod-network.67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.306 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.308 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.328 [WARNING][4917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" HandleID="k8s-pod-network.67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.328 [INFO][4917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" HandleID="k8s-pod-network.67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.338 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:26.354469 env[1842]: 2025-05-17 00:37:26.341 [INFO][4899] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:26.354469 env[1842]: time="2025-05-17T00:37:26.347923771Z" level=info msg="TearDown network for sandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\" successfully" May 17 00:37:26.354469 env[1842]: time="2025-05-17T00:37:26.347983556Z" level=info msg="StopPodSandbox for \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\" returns successfully" May 17 00:37:26.354469 env[1842]: time="2025-05-17T00:37:26.348816195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pvwnn,Uid:cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce,Namespace:kube-system,Attempt:1,}" May 17 00:37:26.348107 systemd[1]: run-netns-cni\x2d9acad7b4\x2d3871\x2d1552\x2d6d86\x2df10174f090b5.mount: Deactivated successfully. 
May 17 00:37:26.466083 kubelet[2785]: E0517 00:37:26.466039 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:37:26.670913 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:37:26.671091 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califf9601efd43: link becomes ready May 17 00:37:26.670192 systemd-networkd[1517]: califf9601efd43: Link UP May 17 00:37:26.672264 systemd-networkd[1517]: califf9601efd43: Gained carrier May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.533 [INFO][4925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0 calico-kube-controllers-7487d56f97- calico-system 10bbdf63-9286-463c-aba2-5278ed8400da 992 0 2025-05-17 00:36:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7487d56f97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-143 calico-kube-controllers-7487d56f97-mnmbq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califf9601efd43 [] [] }} ContainerID="0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" Namespace="calico-system" Pod="calico-kube-controllers-7487d56f97-mnmbq" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.534 [INFO][4925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" Namespace="calico-system" 
Pod="calico-kube-controllers-7487d56f97-mnmbq" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.594 [INFO][4947] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" HandleID="k8s-pod-network.0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.595 [INFO][4947] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" HandleID="k8s-pod-network.0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003254d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-143", "pod":"calico-kube-controllers-7487d56f97-mnmbq", "timestamp":"2025-05-17 00:37:26.594669917 +0000 UTC"}, Hostname:"ip-172-31-26-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.595 [INFO][4947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.595 [INFO][4947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.595 [INFO][4947] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-143' May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.603 [INFO][4947] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" host="ip-172-31-26-143" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.616 [INFO][4947] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-143" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.629 [INFO][4947] ipam/ipam.go 511: Trying affinity for 192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.631 [INFO][4947] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.637 [INFO][4947] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.637 [INFO][4947] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" host="ip-172-31-26-143" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.640 [INFO][4947] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.646 [INFO][4947] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" host="ip-172-31-26-143" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.661 [INFO][4947] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.133/26] block=192.168.123.128/26 
handle="k8s-pod-network.0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" host="ip-172-31-26-143" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.661 [INFO][4947] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.133/26] handle="k8s-pod-network.0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" host="ip-172-31-26-143" May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.661 [INFO][4947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:26.716969 env[1842]: 2025-05-17 00:37:26.661 [INFO][4947] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.133/26] IPv6=[] ContainerID="0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" HandleID="k8s-pod-network.0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:26.724605 env[1842]: 2025-05-17 00:37:26.664 [INFO][4925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" Namespace="calico-system" Pod="calico-kube-controllers-7487d56f97-mnmbq" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0", GenerateName:"calico-kube-controllers-7487d56f97-", Namespace:"calico-system", SelfLink:"", UID:"10bbdf63-9286-463c-aba2-5278ed8400da", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7487d56f97", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"", Pod:"calico-kube-controllers-7487d56f97-mnmbq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf9601efd43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:26.724605 env[1842]: 2025-05-17 00:37:26.665 [INFO][4925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.133/32] ContainerID="0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" Namespace="calico-system" Pod="calico-kube-controllers-7487d56f97-mnmbq" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:26.724605 env[1842]: 2025-05-17 00:37:26.666 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf9601efd43 ContainerID="0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" Namespace="calico-system" Pod="calico-kube-controllers-7487d56f97-mnmbq" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:26.724605 env[1842]: 2025-05-17 00:37:26.674 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" Namespace="calico-system" Pod="calico-kube-controllers-7487d56f97-mnmbq" 
WorkloadEndpoint="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:26.724605 env[1842]: 2025-05-17 00:37:26.674 [INFO][4925] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" Namespace="calico-system" Pod="calico-kube-controllers-7487d56f97-mnmbq" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0", GenerateName:"calico-kube-controllers-7487d56f97-", Namespace:"calico-system", SelfLink:"", UID:"10bbdf63-9286-463c-aba2-5278ed8400da", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7487d56f97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c", Pod:"calico-kube-controllers-7487d56f97-mnmbq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf9601efd43", MAC:"ea:b9:8a:35:4b:1b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:26.724605 env[1842]: 2025-05-17 00:37:26.703 [INFO][4925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c" Namespace="calico-system" Pod="calico-kube-controllers-7487d56f97-mnmbq" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:26.744000 audit[4968]: NETFILTER_CFG table=filter:115 family=2 entries=44 op=nft_register_chain pid=4968 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:26.746991 kernel: kauditd_printk_skb: 541 callbacks suppressed May 17 00:37:26.747084 kernel: audit: type=1325 audit(1747442246.744:412): table=filter:115 family=2 entries=44 op=nft_register_chain pid=4968 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:26.744000 audit[4968]: SYSCALL arch=c000003e syscall=46 success=yes exit=21936 a0=3 a1=7ffce3a8c9d0 a2=0 a3=7ffce3a8c9bc items=0 ppid=4305 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:26.754652 kernel: audit: type=1300 audit(1747442246.744:412): arch=c000003e syscall=46 success=yes exit=21936 a0=3 a1=7ffce3a8c9d0 a2=0 a3=7ffce3a8c9bc items=0 ppid=4305 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:26.744000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:26.761930 kernel: audit: type=1327 audit(1747442246.744:412): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:26.795640 systemd-networkd[1517]: caliad068b2eb93: Link UP May 17 00:37:26.798107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliad068b2eb93: link becomes ready May 17 00:37:26.798100 systemd-networkd[1517]: caliad068b2eb93: Gained carrier May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.579 [INFO][4935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0 coredns-7c65d6cfc9- kube-system cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce 995 0 2025-05-17 00:36:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-143 coredns-7c65d6cfc9-pvwnn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad068b2eb93 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pvwnn" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.580 [INFO][4935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pvwnn" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.680 [INFO][4956] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" HandleID="k8s-pod-network.93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" 
Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.682 [INFO][4956] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" HandleID="k8s-pod-network.93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e160), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-143", "pod":"coredns-7c65d6cfc9-pvwnn", "timestamp":"2025-05-17 00:37:26.680656441 +0000 UTC"}, Hostname:"ip-172-31-26-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.682 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.682 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.682 [INFO][4956] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-143' May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.712 [INFO][4956] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" host="ip-172-31-26-143" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.731 [INFO][4956] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-143" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.738 [INFO][4956] ipam/ipam.go 511: Trying affinity for 192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.742 [INFO][4956] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.756 [INFO][4956] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.756 [INFO][4956] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" host="ip-172-31-26-143" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.763 [INFO][4956] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.774 [INFO][4956] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" host="ip-172-31-26-143" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.787 [INFO][4956] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.134/26] block=192.168.123.128/26 
handle="k8s-pod-network.93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" host="ip-172-31-26-143" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.788 [INFO][4956] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.134/26] handle="k8s-pod-network.93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" host="ip-172-31-26-143" May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.788 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:26.832944 env[1842]: 2025-05-17 00:37:26.788 [INFO][4956] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.134/26] IPv6=[] ContainerID="93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" HandleID="k8s-pod-network.93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:26.834117 env[1842]: 2025-05-17 00:37:26.790 [INFO][4935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pvwnn" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"", Pod:"coredns-7c65d6cfc9-pvwnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad068b2eb93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:26.834117 env[1842]: 2025-05-17 00:37:26.791 [INFO][4935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.134/32] ContainerID="93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pvwnn" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:26.834117 env[1842]: 2025-05-17 00:37:26.791 [INFO][4935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad068b2eb93 ContainerID="93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pvwnn" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:26.834117 env[1842]: 2025-05-17 00:37:26.799 [INFO][4935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-pvwnn" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:26.834117 env[1842]: 2025-05-17 00:37:26.800 [INFO][4935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pvwnn" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa", Pod:"coredns-7c65d6cfc9-pvwnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad068b2eb93", MAC:"52:bb:9b:0f:d8:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:26.834117 env[1842]: 2025-05-17 00:37:26.826 [INFO][4935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pvwnn" WorkloadEndpoint="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:26.847884 env[1842]: time="2025-05-17T00:37:26.846773168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:26.847884 env[1842]: time="2025-05-17T00:37:26.846948985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:26.847884 env[1842]: time="2025-05-17T00:37:26.846987148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:26.847884 env[1842]: time="2025-05-17T00:37:26.847345386Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c pid=4982 runtime=io.containerd.runc.v2 May 17 00:37:26.866983 systemd-networkd[1517]: calic1551bc05b6: Gained IPv6LL May 17 00:37:26.883381 kernel: audit: type=1325 audit(1747442246.874:413): table=filter:116 family=2 entries=50 op=nft_register_chain pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:26.883560 kernel: audit: type=1300 audit(1747442246.874:413): arch=c000003e syscall=46 success=yes exit=24368 a0=3 a1=7ffe94e91030 a2=0 a3=7ffe94e9101c items=0 ppid=4305 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:26.874000 audit[5004]: NETFILTER_CFG table=filter:116 family=2 entries=50 op=nft_register_chain pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:26.874000 audit[5004]: SYSCALL arch=c000003e syscall=46 success=yes exit=24368 a0=3 a1=7ffe94e91030 a2=0 a3=7ffe94e9101c items=0 ppid=4305 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:26.883884 env[1842]: time="2025-05-17T00:37:26.882446894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:26.883884 env[1842]: time="2025-05-17T00:37:26.882535321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:26.883884 env[1842]: time="2025-05-17T00:37:26.882565582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:26.883884 env[1842]: time="2025-05-17T00:37:26.882753457Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa pid=5012 runtime=io.containerd.runc.v2 May 17 00:37:26.874000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:26.887851 kernel: audit: type=1327 audit(1747442246.874:413): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:27.035527 env[1842]: time="2025-05-17T00:37:27.035475827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7487d56f97-mnmbq,Uid:10bbdf63-9286-463c-aba2-5278ed8400da,Namespace:calico-system,Attempt:1,} returns sandbox id \"0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c\"" May 17 00:37:27.046556 env[1842]: time="2025-05-17T00:37:27.046507900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pvwnn,Uid:cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce,Namespace:kube-system,Attempt:1,} returns sandbox id \"93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa\"" May 17 00:37:27.053286 env[1842]: time="2025-05-17T00:37:27.053232971Z" level=info msg="CreateContainer within sandbox \"93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:37:27.098763 env[1842]: time="2025-05-17T00:37:27.098708684Z" level=info msg="StopPodSandbox for 
\"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\"" May 17 00:37:27.110057 env[1842]: time="2025-05-17T00:37:27.110000033Z" level=info msg="CreateContainer within sandbox \"93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae61ad3fd44350caf6c20b5d412042c9b48fd3c026cdceb0e2feebb34517f8c0\"" May 17 00:37:27.120909 env[1842]: time="2025-05-17T00:37:27.120870431Z" level=info msg="StartContainer for \"ae61ad3fd44350caf6c20b5d412042c9b48fd3c026cdceb0e2feebb34517f8c0\"" May 17 00:37:27.277400 env[1842]: time="2025-05-17T00:37:27.277344687Z" level=info msg="StartContainer for \"ae61ad3fd44350caf6c20b5d412042c9b48fd3c026cdceb0e2feebb34517f8c0\" returns successfully" May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.241 [INFO][5080] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.242 [INFO][5080] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" iface="eth0" netns="/var/run/netns/cni-321ec7fb-9ae9-0ce2-f8cc-135d56db1c8e" May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.242 [INFO][5080] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" iface="eth0" netns="/var/run/netns/cni-321ec7fb-9ae9-0ce2-f8cc-135d56db1c8e" May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.242 [INFO][5080] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" iface="eth0" netns="/var/run/netns/cni-321ec7fb-9ae9-0ce2-f8cc-135d56db1c8e" May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.242 [INFO][5080] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.242 [INFO][5080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.334 [INFO][5108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" HandleID="k8s-pod-network.9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.334 [INFO][5108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.334 [INFO][5108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.344 [WARNING][5108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" HandleID="k8s-pod-network.9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.345 [INFO][5108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" HandleID="k8s-pod-network.9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.354 [INFO][5108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:27.377966 env[1842]: 2025-05-17 00:37:27.363 [INFO][5080] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:27.371712 systemd[1]: run-netns-cni\x2d321ec7fb\x2d9ae9\x2d0ce2\x2df8cc\x2d135d56db1c8e.mount: Deactivated successfully. 
May 17 00:37:27.379645 env[1842]: time="2025-05-17T00:37:27.379590131Z" level=info msg="TearDown network for sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\" successfully" May 17 00:37:27.379765 env[1842]: time="2025-05-17T00:37:27.379747221Z" level=info msg="StopPodSandbox for \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\" returns successfully" May 17 00:37:27.380777 env[1842]: time="2025-05-17T00:37:27.380741330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74446499d9-7jqmj,Uid:e32dedf8-419f-45be-b1fc-aa56e4bf91b3,Namespace:calico-apiserver,Attempt:1,}" May 17 00:37:27.437581 env[1842]: time="2025-05-17T00:37:27.437533826Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:27.442633 env[1842]: time="2025-05-17T00:37:27.442584793Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:27.446166 env[1842]: time="2025-05-17T00:37:27.446117959Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:27.448280 env[1842]: time="2025-05-17T00:37:27.448238874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:37:27.448704 env[1842]: time="2025-05-17T00:37:27.448670144Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:27.454239 env[1842]: 
time="2025-05-17T00:37:27.454199835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:37:27.456811 env[1842]: time="2025-05-17T00:37:27.456766727Z" level=info msg="CreateContainer within sandbox \"2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:37:27.490627 kubelet[2785]: I0517 00:37:27.487860 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pvwnn" podStartSLOduration=45.487818091 podStartE2EDuration="45.487818091s" podCreationTimestamp="2025-05-17 00:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:37:27.486572404 +0000 UTC m=+48.709961150" watchObservedRunningTime="2025-05-17 00:37:27.487818091 +0000 UTC m=+48.711206837" May 17 00:37:27.526164 env[1842]: time="2025-05-17T00:37:27.526006616Z" level=info msg="CreateContainer within sandbox \"2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8fd522bc118b8836825c3df4603e01e9500f423b648750eb397416509d823637\"" May 17 00:37:27.526000 audit[5135]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:27.526000 audit[5135]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd580ddd40 a2=0 a3=7ffd580ddd2c items=0 ppid=2892 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:27.532971 env[1842]: time="2025-05-17T00:37:27.532922733Z" level=info msg="StartContainer for \"8fd522bc118b8836825c3df4603e01e9500f423b648750eb397416509d823637\"" May 17 00:37:27.535424 
kernel: audit: type=1325 audit(1747442247.526:414): table=filter:117 family=2 entries=20 op=nft_register_rule pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:27.535527 kernel: audit: type=1300 audit(1747442247.526:414): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd580ddd40 a2=0 a3=7ffd580ddd2c items=0 ppid=2892 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:27.538614 kernel: audit: type=1327 audit(1747442247.526:414): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:27.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:27.535000 audit[5135]: NETFILTER_CFG table=nat:118 family=2 entries=14 op=nft_register_rule pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:27.546852 kernel: audit: type=1325 audit(1747442247.535:415): table=nat:118 family=2 entries=14 op=nft_register_rule pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:27.535000 audit[5135]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd580ddd40 a2=0 a3=0 items=0 ppid=2892 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:27.535000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:27.695487 env[1842]: time="2025-05-17T00:37:27.694173251Z" level=info msg="StartContainer for \"8fd522bc118b8836825c3df4603e01e9500f423b648750eb397416509d823637\" returns successfully" May 17 
00:37:27.742265 systemd-networkd[1517]: cali42336864ccc: Link UP May 17 00:37:27.743887 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:37:27.743973 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali42336864ccc: link becomes ready May 17 00:37:27.747193 systemd-networkd[1517]: cali42336864ccc: Gained carrier May 17 00:37:27.769867 systemd-networkd[1517]: califf9601efd43: Gained IPv6LL May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.588 [INFO][5124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0 calico-apiserver-74446499d9- calico-apiserver e32dedf8-419f-45be-b1fc-aa56e4bf91b3 1011 0 2025-05-17 00:36:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74446499d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-143 calico-apiserver-74446499d9-7jqmj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali42336864ccc [] [] }} ContainerID="6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-7jqmj" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.588 [INFO][5124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-7jqmj" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.658 [INFO][5158] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" HandleID="k8s-pod-network.6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.658 [INFO][5158] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" HandleID="k8s-pod-network.6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d1630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-143", "pod":"calico-apiserver-74446499d9-7jqmj", "timestamp":"2025-05-17 00:37:27.658000599 +0000 UTC"}, Hostname:"ip-172-31-26-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.658 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.658 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.658 [INFO][5158] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-143' May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.669 [INFO][5158] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" host="ip-172-31-26-143" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.682 [INFO][5158] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-143" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.691 [INFO][5158] ipam/ipam.go 511: Trying affinity for 192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.700 [INFO][5158] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.706 [INFO][5158] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.706 [INFO][5158] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" host="ip-172-31-26-143" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.709 [INFO][5158] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.715 [INFO][5158] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" host="ip-172-31-26-143" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.729 [INFO][5158] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.135/26] block=192.168.123.128/26 
handle="k8s-pod-network.6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" host="ip-172-31-26-143" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.729 [INFO][5158] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.135/26] handle="k8s-pod-network.6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" host="ip-172-31-26-143" May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.729 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:27.781613 env[1842]: 2025-05-17 00:37:27.731 [INFO][5158] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.135/26] IPv6=[] ContainerID="6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" HandleID="k8s-pod-network.6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:27.784219 env[1842]: 2025-05-17 00:37:27.736 [INFO][5124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-7jqmj" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0", GenerateName:"calico-apiserver-74446499d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e32dedf8-419f-45be-b1fc-aa56e4bf91b3", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74446499d9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"", Pod:"calico-apiserver-74446499d9-7jqmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42336864ccc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:27.784219 env[1842]: 2025-05-17 00:37:27.736 [INFO][5124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.135/32] ContainerID="6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-7jqmj" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:27.784219 env[1842]: 2025-05-17 00:37:27.736 [INFO][5124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42336864ccc ContainerID="6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-7jqmj" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:27.784219 env[1842]: 2025-05-17 00:37:27.747 [INFO][5124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-7jqmj" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:27.784219 env[1842]: 2025-05-17 00:37:27.747 [INFO][5124] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-7jqmj" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0", GenerateName:"calico-apiserver-74446499d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e32dedf8-419f-45be-b1fc-aa56e4bf91b3", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74446499d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a", Pod:"calico-apiserver-74446499d9-7jqmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42336864ccc", MAC:"f6:29:ea:80:cf:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:27.784219 env[1842]: 2025-05-17 00:37:27.772 [INFO][5124] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-7jqmj" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:27.809261 env[1842]: time="2025-05-17T00:37:27.808097220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:27.809261 env[1842]: time="2025-05-17T00:37:27.808184567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:27.809261 env[1842]: time="2025-05-17T00:37:27.808200886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:27.809261 env[1842]: time="2025-05-17T00:37:27.808447759Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a pid=5196 runtime=io.containerd.runc.v2 May 17 00:37:27.849000 audit[5206]: NETFILTER_CFG table=filter:119 family=2 entries=62 op=nft_register_chain pid=5206 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:27.849000 audit[5206]: SYSCALL arch=c000003e syscall=46 success=yes exit=31740 a0=3 a1=7ffec0781980 a2=0 a3=7ffec078196c items=0 ppid=4305 pid=5206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:27.849000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:27.961779 env[1842]: time="2025-05-17T00:37:27.961637384Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74446499d9-7jqmj,Uid:e32dedf8-419f-45be-b1fc-aa56e4bf91b3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a\"" May 17 00:37:28.072588 env[1842]: time="2025-05-17T00:37:28.072534791Z" level=info msg="StopPodSandbox for \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\"" May 17 00:37:28.147957 systemd-networkd[1517]: caliad068b2eb93: Gained IPv6LL May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.168 [INFO][5247] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.168 [INFO][5247] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" iface="eth0" netns="/var/run/netns/cni-f7163a61-3ef6-59f7-9bda-93b4c1d5dfbe" May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.169 [INFO][5247] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" iface="eth0" netns="/var/run/netns/cni-f7163a61-3ef6-59f7-9bda-93b4c1d5dfbe" May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.169 [INFO][5247] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" iface="eth0" netns="/var/run/netns/cni-f7163a61-3ef6-59f7-9bda-93b4c1d5dfbe" May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.169 [INFO][5247] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.169 [INFO][5247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.209 [INFO][5254] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" HandleID="k8s-pod-network.9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.210 [INFO][5254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.210 [INFO][5254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.216 [WARNING][5254] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" HandleID="k8s-pod-network.9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.216 [INFO][5254] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" HandleID="k8s-pod-network.9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.218 [INFO][5254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:28.223862 env[1842]: 2025-05-17 00:37:28.221 [INFO][5247] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:28.225241 env[1842]: time="2025-05-17T00:37:28.224077342Z" level=info msg="TearDown network for sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\" successfully" May 17 00:37:28.225241 env[1842]: time="2025-05-17T00:37:28.224120374Z" level=info msg="StopPodSandbox for \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\" returns successfully" May 17 00:37:28.225241 env[1842]: time="2025-05-17T00:37:28.225186537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74446499d9-stgnw,Uid:e86c1943-5287-43a1-8c5c-dfb67368014d,Namespace:calico-apiserver,Attempt:1,}" May 17 00:37:28.331392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173813019.mount: Deactivated successfully. May 17 00:37:28.331616 systemd[1]: run-netns-cni\x2df7163a61\x2d3ef6\x2d59f7\x2d9bda\x2d93b4c1d5dfbe.mount: Deactivated successfully. 
May 17 00:37:28.430087 systemd-networkd[1517]: calidcd0a3704fa: Link UP May 17 00:37:28.431564 systemd-networkd[1517]: calidcd0a3704fa: Gained carrier May 17 00:37:28.431981 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidcd0a3704fa: link becomes ready May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.303 [INFO][5261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0 calico-apiserver-74446499d9- calico-apiserver e86c1943-5287-43a1-8c5c-dfb67368014d 1030 0 2025-05-17 00:36:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74446499d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-143 calico-apiserver-74446499d9-stgnw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidcd0a3704fa [] [] }} ContainerID="52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-stgnw" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.304 [INFO][5261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-stgnw" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.366 [INFO][5272] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" HandleID="k8s-pod-network.52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" 
Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.366 [INFO][5272] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" HandleID="k8s-pod-network.52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-143", "pod":"calico-apiserver-74446499d9-stgnw", "timestamp":"2025-05-17 00:37:28.366194571 +0000 UTC"}, Hostname:"ip-172-31-26-143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.366 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.366 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.366 [INFO][5272] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-143' May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.374 [INFO][5272] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" host="ip-172-31-26-143" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.381 [INFO][5272] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-143" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.387 [INFO][5272] ipam/ipam.go 511: Trying affinity for 192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.390 [INFO][5272] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.393 [INFO][5272] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ip-172-31-26-143" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.393 [INFO][5272] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" host="ip-172-31-26-143" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.400 [INFO][5272] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.408 [INFO][5272] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" host="ip-172-31-26-143" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.420 [INFO][5272] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.136/26] block=192.168.123.128/26 
handle="k8s-pod-network.52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" host="ip-172-31-26-143" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.420 [INFO][5272] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.136/26] handle="k8s-pod-network.52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" host="ip-172-31-26-143" May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.420 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:28.460433 env[1842]: 2025-05-17 00:37:28.420 [INFO][5272] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.136/26] IPv6=[] ContainerID="52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" HandleID="k8s-pod-network.52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:28.461786 env[1842]: 2025-05-17 00:37:28.425 [INFO][5261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-stgnw" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0", GenerateName:"calico-apiserver-74446499d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e86c1943-5287-43a1-8c5c-dfb67368014d", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74446499d9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"", Pod:"calico-apiserver-74446499d9-stgnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcd0a3704fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:28.461786 env[1842]: 2025-05-17 00:37:28.425 [INFO][5261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.136/32] ContainerID="52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-stgnw" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:28.461786 env[1842]: 2025-05-17 00:37:28.425 [INFO][5261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcd0a3704fa ContainerID="52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-stgnw" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:28.461786 env[1842]: 2025-05-17 00:37:28.433 [INFO][5261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-stgnw" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:28.461786 env[1842]: 2025-05-17 00:37:28.434 [INFO][5261] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-stgnw" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0", GenerateName:"calico-apiserver-74446499d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e86c1943-5287-43a1-8c5c-dfb67368014d", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74446499d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a", Pod:"calico-apiserver-74446499d9-stgnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcd0a3704fa", MAC:"02:2c:8b:c6:77:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:28.461786 env[1842]: 2025-05-17 00:37:28.455 [INFO][5261] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a" Namespace="calico-apiserver" Pod="calico-apiserver-74446499d9-stgnw" WorkloadEndpoint="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:28.482309 env[1842]: time="2025-05-17T00:37:28.482143241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:28.482309 env[1842]: time="2025-05-17T00:37:28.482202089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:28.482309 env[1842]: time="2025-05-17T00:37:28.482217101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:28.484687 env[1842]: time="2025-05-17T00:37:28.483486843Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a pid=5295 runtime=io.containerd.runc.v2 May 17 00:37:28.518000 audit[5311]: NETFILTER_CFG table=filter:120 family=2 entries=53 op=nft_register_chain pid=5311 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:37:28.518000 audit[5311]: SYSCALL arch=c000003e syscall=46 success=yes exit=26608 a0=3 a1=7ffcd17418a0 a2=0 a3=7ffcd174188c items=0 ppid=4305 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:28.518000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:37:28.752000 audit[5323]: NETFILTER_CFG table=filter:121 family=2 entries=17 
op=nft_register_rule pid=5323 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:28.752000 audit[5323]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff09797890 a2=0 a3=7fff0979787c items=0 ppid=2892 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:28.752000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:28.762000 audit[5323]: NETFILTER_CFG table=nat:122 family=2 entries=35 op=nft_register_chain pid=5323 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:28.762000 audit[5323]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff09797890 a2=0 a3=7fff0979787c items=0 ppid=2892 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:28.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:28.818664 env[1842]: time="2025-05-17T00:37:28.818565156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74446499d9-stgnw,Uid:e86c1943-5287-43a1-8c5c-dfb67368014d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a\"" May 17 00:37:29.813197 systemd-networkd[1517]: cali42336864ccc: Gained IPv6LL May 17 00:37:29.939032 systemd-networkd[1517]: calidcd0a3704fa: Gained IPv6LL May 17 00:37:30.435131 env[1842]: time="2025-05-17T00:37:30.435067786Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:30.437611 env[1842]: time="2025-05-17T00:37:30.437550867Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:30.439369 env[1842]: time="2025-05-17T00:37:30.439328552Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:30.441423 env[1842]: time="2025-05-17T00:37:30.441367701Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:30.442140 env[1842]: time="2025-05-17T00:37:30.442100420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:37:30.443655 env[1842]: time="2025-05-17T00:37:30.443614249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:37:30.495416 env[1842]: time="2025-05-17T00:37:30.495348266Z" level=info msg="CreateContainer within sandbox \"0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:37:30.522904 env[1842]: time="2025-05-17T00:37:30.522819524Z" level=info msg="CreateContainer within sandbox \"0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"6db51b63b6008adc061663f3366a94be41006043a16ac30596732114aa499e66\"" May 17 00:37:30.525013 env[1842]: time="2025-05-17T00:37:30.523624410Z" level=info msg="StartContainer for \"6db51b63b6008adc061663f3366a94be41006043a16ac30596732114aa499e66\"" May 17 00:37:30.627564 env[1842]: time="2025-05-17T00:37:30.627495227Z" level=info msg="StartContainer for \"6db51b63b6008adc061663f3366a94be41006043a16ac30596732114aa499e66\" returns successfully" May 17 00:37:31.464325 systemd[1]: run-containerd-runc-k8s.io-6db51b63b6008adc061663f3366a94be41006043a16ac30596732114aa499e66-runc.hUVd75.mount: Deactivated successfully. May 17 00:37:31.582000 audit[5380]: NETFILTER_CFG table=filter:123 family=2 entries=14 op=nft_register_rule pid=5380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:31.582000 audit[5380]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff1a867910 a2=0 a3=7fff1a8678fc items=0 ppid=2892 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:31.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:31.624000 audit[5380]: NETFILTER_CFG table=nat:124 family=2 entries=56 op=nft_register_chain pid=5380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:31.624000 audit[5380]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff1a867910 a2=0 a3=7fff1a8678fc items=0 ppid=2892 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:31.624000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
May 17 00:37:31.654199 systemd[1]: run-containerd-runc-k8s.io-6db51b63b6008adc061663f3366a94be41006043a16ac30596732114aa499e66-runc.25Zmu1.mount: Deactivated successfully. May 17 00:37:31.736064 kubelet[2785]: I0517 00:37:31.730341 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7487d56f97-mnmbq" podStartSLOduration=30.327269032 podStartE2EDuration="33.730309765s" podCreationTimestamp="2025-05-17 00:36:58 +0000 UTC" firstStartedPulling="2025-05-17 00:37:27.040333538 +0000 UTC m=+48.263722277" lastFinishedPulling="2025-05-17 00:37:30.443374274 +0000 UTC m=+51.666763010" observedRunningTime="2025-05-17 00:37:31.571174055 +0000 UTC m=+52.794562802" watchObservedRunningTime="2025-05-17 00:37:31.730309765 +0000 UTC m=+52.953698508" May 17 00:37:32.039457 env[1842]: time="2025-05-17T00:37:32.039407507Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:32.049732 env[1842]: time="2025-05-17T00:37:32.049677942Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:32.056325 env[1842]: time="2025-05-17T00:37:32.056261829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:32.059773 env[1842]: time="2025-05-17T00:37:32.059692665Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:32.061694 env[1842]: time="2025-05-17T00:37:32.060705890Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:37:32.064547 env[1842]: time="2025-05-17T00:37:32.064507607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:37:32.066660 env[1842]: time="2025-05-17T00:37:32.066280735Z" level=info msg="CreateContainer within sandbox \"2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:37:32.102722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount716594556.mount: Deactivated successfully. May 17 00:37:32.120314 env[1842]: time="2025-05-17T00:37:32.120255750Z" level=info msg="CreateContainer within sandbox \"2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0ef1a97033d811c97b5ad33969e060579e443afb72e145fa9b831737921df8cb\"" May 17 00:37:32.122742 env[1842]: time="2025-05-17T00:37:32.122702352Z" level=info msg="StartContainer for \"0ef1a97033d811c97b5ad33969e060579e443afb72e145fa9b831737921df8cb\"" May 17 00:37:32.199321 env[1842]: time="2025-05-17T00:37:32.199214477Z" level=info msg="StartContainer for \"0ef1a97033d811c97b5ad33969e060579e443afb72e145fa9b831737921df8cb\" returns successfully" May 17 00:37:33.503850 kubelet[2785]: I0517 00:37:33.498333 2785 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:37:33.605342 kubelet[2785]: I0517 00:37:33.605295 2785 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:37:34.296690 systemd[1]: Started sshd@7-172.31.26.143:22-139.178.68.195:60194.service. 
May 17 00:37:34.303218 kernel: kauditd_printk_skb: 20 callbacks suppressed May 17 00:37:34.303320 kernel: audit: type=1130 audit(1747442254.298:422): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.26.143:22-139.178.68.195:60194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:37:34.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.26.143:22-139.178.68.195:60194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:37:34.593000 audit[5450]: USER_ACCT pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:34.630422 kernel: audit: type=1101 audit(1747442254.593:423): pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:34.630551 kernel: audit: type=1103 audit(1747442254.604:424): pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:34.631532 kernel: audit: type=1006 audit(1747442254.608:425): pid=5450 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 May 17 00:37:34.631620 kernel: audit: type=1300 audit(1747442254.608:425): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc48b5fda0 a2=3 a3=0 items=0 ppid=1 pid=5450 auid=500 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:34.631670 kernel: audit: type=1327 audit(1747442254.608:425): proctitle=737368643A20636F7265205B707269765D May 17 00:37:34.604000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:34.608000 audit[5450]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc48b5fda0 a2=3 a3=0 items=0 ppid=1 pid=5450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:34.608000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:37:34.621397 sshd[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:34.638507 sshd[5450]: Accepted publickey for core from 139.178.68.195 port 60194 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:37:34.656874 systemd[1]: Started session-8.scope. May 17 00:37:34.658494 systemd-logind[1830]: New session 8 of user core. 
May 17 00:37:34.666000 audit[5450]: USER_START pid=5450 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:34.676015 kernel: audit: type=1105 audit(1747442254.666:426): pid=5450 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:34.676137 kernel: audit: type=1103 audit(1747442254.666:427): pid=5453 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:34.666000 audit[5453]: CRED_ACQ pid=5453 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:35.273969 env[1842]: time="2025-05-17T00:37:35.273885914Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:35.278491 env[1842]: time="2025-05-17T00:37:35.278437953Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:35.282196 env[1842]: time="2025-05-17T00:37:35.281198678Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:35.282954 env[1842]: time="2025-05-17T00:37:35.282915799Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:35.283746 env[1842]: time="2025-05-17T00:37:35.283710093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:37:35.556768 env[1842]: time="2025-05-17T00:37:35.556726091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:37:35.615121 env[1842]: time="2025-05-17T00:37:35.615072677Z" level=info msg="CreateContainer within sandbox \"6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:37:35.635024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065489513.mount: Deactivated successfully. 
May 17 00:37:35.640033 env[1842]: time="2025-05-17T00:37:35.639958310Z" level=info msg="CreateContainer within sandbox \"6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4fc6afadc9679c39011492121b9d592a18ac2a59664ed7c37afa620f6828887a\"" May 17 00:37:35.678908 env[1842]: time="2025-05-17T00:37:35.678542792Z" level=info msg="StartContainer for \"4fc6afadc9679c39011492121b9d592a18ac2a59664ed7c37afa620f6828887a\"" May 17 00:37:35.827305 env[1842]: time="2025-05-17T00:37:35.827179035Z" level=info msg="StartContainer for \"4fc6afadc9679c39011492121b9d592a18ac2a59664ed7c37afa620f6828887a\" returns successfully" May 17 00:37:35.926159 env[1842]: time="2025-05-17T00:37:35.923781081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:35.931984 env[1842]: time="2025-05-17T00:37:35.931931068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:35.943912 env[1842]: time="2025-05-17T00:37:35.943863105Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:35.948978 env[1842]: time="2025-05-17T00:37:35.948926553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:35.951993 env[1842]: time="2025-05-17T00:37:35.951939481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference 
\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:37:35.989948 env[1842]: time="2025-05-17T00:37:35.989732066Z" level=info msg="CreateContainer within sandbox \"52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:37:36.029394 env[1842]: time="2025-05-17T00:37:36.029213858Z" level=info msg="CreateContainer within sandbox \"52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0ed430a19d01273ac152952066773caa8bc1c6cc86d1b7e31dfffbe85cac144f\"" May 17 00:37:36.031664 env[1842]: time="2025-05-17T00:37:36.030367908Z" level=info msg="StartContainer for \"0ed430a19d01273ac152952066773caa8bc1c6cc86d1b7e31dfffbe85cac144f\"" May 17 00:37:36.219465 env[1842]: time="2025-05-17T00:37:36.218639151Z" level=info msg="StartContainer for \"0ed430a19d01273ac152952066773caa8bc1c6cc86d1b7e31dfffbe85cac144f\" returns successfully" May 17 00:37:36.563367 sshd[5450]: pam_unix(sshd:session): session closed for user core May 17 00:37:36.564000 audit[5450]: USER_END pid=5450 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:36.567772 systemd[1]: sshd@7-172.31.26.143:22-139.178.68.195:60194.service: Deactivated successfully. May 17 00:37:36.569026 systemd[1]: session-8.scope: Deactivated successfully. 
May 17 00:37:36.573858 kernel: audit: type=1106 audit(1747442256.564:428): pid=5450 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:36.574009 systemd-logind[1830]: Session 8 logged out. Waiting for processes to exit. May 17 00:37:36.575152 systemd-logind[1830]: Removed session 8. May 17 00:37:36.564000 audit[5450]: CRED_DISP pid=5450 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:36.591251 kernel: audit: type=1104 audit(1747442256.564:429): pid=5450 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:36.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.26.143:22-139.178.68.195:60194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:37:36.638527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202017443.mount: Deactivated successfully. 
May 17 00:37:37.109986 kubelet[2785]: I0517 00:37:37.060819 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z72kr" podStartSLOduration=32.553146335 podStartE2EDuration="39.027973089s" podCreationTimestamp="2025-05-17 00:36:58 +0000 UTC" firstStartedPulling="2025-05-17 00:37:25.587596561 +0000 UTC m=+46.810985298" lastFinishedPulling="2025-05-17 00:37:32.062423325 +0000 UTC m=+53.285812052" observedRunningTime="2025-05-17 00:37:32.570459719 +0000 UTC m=+53.793848465" watchObservedRunningTime="2025-05-17 00:37:37.027973089 +0000 UTC m=+58.251361833" May 17 00:37:37.116672 kubelet[2785]: I0517 00:37:37.110763 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74446499d9-stgnw" podStartSLOduration=35.966103584 podStartE2EDuration="43.110738104s" podCreationTimestamp="2025-05-17 00:36:54 +0000 UTC" firstStartedPulling="2025-05-17 00:37:28.82044721 +0000 UTC m=+50.043835945" lastFinishedPulling="2025-05-17 00:37:35.965081737 +0000 UTC m=+57.188470465" observedRunningTime="2025-05-17 00:37:37.017072775 +0000 UTC m=+58.240461522" watchObservedRunningTime="2025-05-17 00:37:37.110738104 +0000 UTC m=+58.334126850" May 17 00:37:37.117004 kubelet[2785]: I0517 00:37:37.116947 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74446499d9-7jqmj" podStartSLOduration=35.528757065 podStartE2EDuration="43.116920118s" podCreationTimestamp="2025-05-17 00:36:54 +0000 UTC" firstStartedPulling="2025-05-17 00:37:27.964715629 +0000 UTC m=+49.188104366" lastFinishedPulling="2025-05-17 00:37:35.552878683 +0000 UTC m=+56.776267419" observedRunningTime="2025-05-17 00:37:37.11055175 +0000 UTC m=+58.333940508" watchObservedRunningTime="2025-05-17 00:37:37.116920118 +0000 UTC m=+58.340308865" May 17 00:37:37.142000 audit[5542]: NETFILTER_CFG table=filter:125 family=2 entries=14 op=nft_register_rule pid=5542 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:37.142000 audit[5542]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe00723770 a2=0 a3=7ffe0072375c items=0 ppid=2892 pid=5542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:37.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:37.148000 audit[5542]: NETFILTER_CFG table=nat:126 family=2 entries=20 op=nft_register_rule pid=5542 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:37.148000 audit[5542]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe00723770 a2=0 a3=7ffe0072375c items=0 ppid=2892 pid=5542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:37.148000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:37.173000 audit[5544]: NETFILTER_CFG table=filter:127 family=2 entries=14 op=nft_register_rule pid=5544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:37.173000 audit[5544]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6969a4d0 a2=0 a3=7ffc6969a4bc items=0 ppid=2892 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:37.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:37.180000 audit[5544]: 
NETFILTER_CFG table=nat:128 family=2 entries=20 op=nft_register_rule pid=5544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:37.180000 audit[5544]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc6969a4d0 a2=0 a3=7ffc6969a4bc items=0 ppid=2892 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:37.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:37.890549 kubelet[2785]: I0517 00:37:37.890508 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:37:38.073667 env[1842]: time="2025-05-17T00:37:38.073348584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:37:38.265916 env[1842]: time="2025-05-17T00:37:38.265733307Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:37:38.266897 env[1842]: time="2025-05-17T00:37:38.266770672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:37:38.267065 kubelet[2785]: E0517 00:37:38.267028 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 
00:37:38.267384 kubelet[2785]: E0517 00:37:38.267085 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:37:38.291536 kubelet[2785]: E0517 00:37:38.291435 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwwxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:Prob
eHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-m25cz_calico-system(d4a0e73a-3af9-4fe3-8741-031055e915ab): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:37:38.294895 kubelet[2785]: E0517 00:37:38.294805 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" 
podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:37:38.728000 audit[5548]: NETFILTER_CFG table=filter:129 family=2 entries=13 op=nft_register_rule pid=5548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:38.728000 audit[5548]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffccc8652a0 a2=0 a3=7ffccc86528c items=0 ppid=2892 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:38.728000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:38.733000 audit[5548]: NETFILTER_CFG table=nat:130 family=2 entries=27 op=nft_register_chain pid=5548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:37:38.733000 audit[5548]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffccc8652a0 a2=0 a3=7ffccc86528c items=0 ppid=2892 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:38.733000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:37:39.122169 env[1842]: time="2025-05-17T00:37:39.121895764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:37:39.139269 env[1842]: time="2025-05-17T00:37:39.139220503Z" level=info msg="StopPodSandbox for \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\"" May 17 00:37:39.291020 env[1842]: time="2025-05-17T00:37:39.290945459Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:37:39.292201 env[1842]: 
time="2025-05-17T00:37:39.292153973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:37:39.292925 kubelet[2785]: E0517 00:37:39.292860 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:37:39.293985 kubelet[2785]: E0517 00:37:39.292940 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:37:39.293985 kubelet[2785]: E0517 00:37:39.293054 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fecda21b58645b7beb0200da9036e4d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbhwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6686775679-47m7l_calico-system(24aa9389-753f-4495-bc1f-dde18df8a4e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:37:39.297683 env[1842]: time="2025-05-17T00:37:39.297636092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:37:39.466102 
env[1842]: time="2025-05-17T00:37:39.465984931Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:37:39.468873 env[1842]: time="2025-05-17T00:37:39.468786482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:37:39.469973 kubelet[2785]: E0517 00:37:39.469203 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:37:39.469973 kubelet[2785]: E0517 00:37:39.469259 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:37:39.469973 kubelet[2785]: E0517 00:37:39.469379 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbhwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6686775679-47m7l_calico-system(24aa9389-753f-4495-bc1f-dde18df8a4e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:37:39.471453 kubelet[2785]: E0517 00:37:39.471267 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:39.641 [WARNING][5561] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8ad07685-9392-4209-bbe3-44ad549c6102", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa", Pod:"coredns-7c65d6cfc9-q95gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1a1509cc20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:39.647 
[INFO][5561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:39.647 [INFO][5561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" iface="eth0" netns="" May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:39.647 [INFO][5561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:39.647 [INFO][5561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:40.004 [INFO][5568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" HandleID="k8s-pod-network.6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:40.013 [INFO][5568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:40.014 [INFO][5568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:40.040 [WARNING][5568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" HandleID="k8s-pod-network.6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:40.040 [INFO][5568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" HandleID="k8s-pod-network.6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:40.044 [INFO][5568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:40.054636 env[1842]: 2025-05-17 00:37:40.048 [INFO][5561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:40.056229 env[1842]: time="2025-05-17T00:37:40.054690201Z" level=info msg="TearDown network for sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\" successfully" May 17 00:37:40.056229 env[1842]: time="2025-05-17T00:37:40.054731341Z" level=info msg="StopPodSandbox for \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\" returns successfully" May 17 00:37:40.100182 env[1842]: time="2025-05-17T00:37:40.099550372Z" level=info msg="RemovePodSandbox for \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\"" May 17 00:37:40.100182 env[1842]: time="2025-05-17T00:37:40.099607366Z" level=info msg="Forcibly stopping sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\"" May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.153 [WARNING][5583] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8ad07685-9392-4209-bbe3-44ad549c6102", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"9766ebebecd73e90d8f585fd9fa2dcd8ace16acb03f483d5eab05b3c93a22caa", Pod:"coredns-7c65d6cfc9-q95gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1a1509cc20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.154 
[INFO][5583] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.154 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" iface="eth0" netns="" May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.154 [INFO][5583] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.154 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.189 [INFO][5590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" HandleID="k8s-pod-network.6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.189 [INFO][5590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.189 [INFO][5590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.196 [WARNING][5590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" HandleID="k8s-pod-network.6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.196 [INFO][5590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" HandleID="k8s-pod-network.6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--q95gq-eth0" May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.198 [INFO][5590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:40.205425 env[1842]: 2025-05-17 00:37:40.201 [INFO][5583] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927" May 17 00:37:40.207089 env[1842]: time="2025-05-17T00:37:40.205456693Z" level=info msg="TearDown network for sandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\" successfully" May 17 00:37:40.215055 env[1842]: time="2025-05-17T00:37:40.215006345Z" level=info msg="RemovePodSandbox \"6b9acb55ae70b001c294d5f6eeebe0a01685512bda479b9710045a862e350927\" returns successfully" May 17 00:37:40.215620 env[1842]: time="2025-05-17T00:37:40.215587084Z" level=info msg="StopPodSandbox for \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\"" May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.262 [WARNING][5604] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0", GenerateName:"calico-apiserver-74446499d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e32dedf8-419f-45be-b1fc-aa56e4bf91b3", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74446499d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a", Pod:"calico-apiserver-74446499d9-7jqmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42336864ccc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.263 [INFO][5604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.263 [INFO][5604] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" iface="eth0" netns="" May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.263 [INFO][5604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.263 [INFO][5604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.291 [INFO][5611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" HandleID="k8s-pod-network.9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.291 [INFO][5611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.291 [INFO][5611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.298 [WARNING][5611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" HandleID="k8s-pod-network.9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.298 [INFO][5611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" HandleID="k8s-pod-network.9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.301 [INFO][5611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:40.305635 env[1842]: 2025-05-17 00:37:40.303 [INFO][5604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:40.305635 env[1842]: time="2025-05-17T00:37:40.305545203Z" level=info msg="TearDown network for sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\" successfully" May 17 00:37:40.305635 env[1842]: time="2025-05-17T00:37:40.305579238Z" level=info msg="StopPodSandbox for \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\" returns successfully" May 17 00:37:40.307650 env[1842]: time="2025-05-17T00:37:40.307478848Z" level=info msg="RemovePodSandbox for \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\"" May 17 00:37:40.307650 env[1842]: time="2025-05-17T00:37:40.307517835Z" level=info msg="Forcibly stopping sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\"" May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.352 [WARNING][5627] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0", GenerateName:"calico-apiserver-74446499d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e32dedf8-419f-45be-b1fc-aa56e4bf91b3", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74446499d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"6d89fefc819b43bc944782fb8d686f6811c94fc380221cfae2431644d142527a", Pod:"calico-apiserver-74446499d9-7jqmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42336864ccc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.352 [INFO][5627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.352 [INFO][5627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" iface="eth0" netns="" May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.352 [INFO][5627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.352 [INFO][5627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.377 [INFO][5634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" HandleID="k8s-pod-network.9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.377 [INFO][5634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.377 [INFO][5634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.386 [WARNING][5634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" HandleID="k8s-pod-network.9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.386 [INFO][5634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" HandleID="k8s-pod-network.9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--7jqmj-eth0" May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.388 [INFO][5634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:40.404945 env[1842]: 2025-05-17 00:37:40.391 [INFO][5627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3" May 17 00:37:40.404945 env[1842]: time="2025-05-17T00:37:40.394717669Z" level=info msg="TearDown network for sandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\" successfully" May 17 00:37:40.407331 env[1842]: time="2025-05-17T00:37:40.406235336Z" level=info msg="RemovePodSandbox \"9404d73ead109634e873921309cd8c81fe81daa8df3e50b1ade9a2d36c8734a3\" returns successfully" May 17 00:37:40.411900 env[1842]: time="2025-05-17T00:37:40.408744215Z" level=info msg="StopPodSandbox for \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\"" May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.510 [WARNING][5648] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"d4a0e73a-3af9-4fe3-8741-031055e915ab", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709", Pod:"goldmane-8f77d7b6c-m25cz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibe451daeac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.510 [INFO][5648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.510 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" iface="eth0" netns="" May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.510 [INFO][5648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.510 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.591 [INFO][5655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" HandleID="k8s-pod-network.392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.591 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.592 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.601 [WARNING][5655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" HandleID="k8s-pod-network.392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.601 [INFO][5655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" HandleID="k8s-pod-network.392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.603 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:40.608966 env[1842]: 2025-05-17 00:37:40.605 [INFO][5648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:40.608966 env[1842]: time="2025-05-17T00:37:40.608022636Z" level=info msg="TearDown network for sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\" successfully" May 17 00:37:40.608966 env[1842]: time="2025-05-17T00:37:40.608058083Z" level=info msg="StopPodSandbox for \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\" returns successfully" May 17 00:37:40.609765 env[1842]: time="2025-05-17T00:37:40.609204539Z" level=info msg="RemovePodSandbox for \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\"" May 17 00:37:40.609765 env[1842]: time="2025-05-17T00:37:40.609243743Z" level=info msg="Forcibly stopping sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\"" May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.666 [WARNING][5670] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"d4a0e73a-3af9-4fe3-8741-031055e915ab", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"3c7b8ed28c338bb38680748c0c19b3a534b0ffe8b0232112fc35bb88285d8709", Pod:"goldmane-8f77d7b6c-m25cz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibe451daeac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.667 [INFO][5670] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.667 [INFO][5670] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" iface="eth0" netns="" May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.667 [INFO][5670] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.667 [INFO][5670] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.724 [INFO][5677] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" HandleID="k8s-pod-network.392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.725 [INFO][5677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.725 [INFO][5677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.735 [WARNING][5677] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" HandleID="k8s-pod-network.392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.735 [INFO][5677] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" HandleID="k8s-pod-network.392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" Workload="ip--172--31--26--143-k8s-goldmane--8f77d7b6c--m25cz-eth0" May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.737 [INFO][5677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:40.741774 env[1842]: 2025-05-17 00:37:40.739 [INFO][5670] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2" May 17 00:37:40.743510 env[1842]: time="2025-05-17T00:37:40.741799019Z" level=info msg="TearDown network for sandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\" successfully" May 17 00:37:40.745518 env[1842]: time="2025-05-17T00:37:40.745429048Z" level=info msg="RemovePodSandbox \"392f07910545355e3064467f0c242664079a0a385581673940e997b146aff9f2\" returns successfully" May 17 00:37:40.746643 env[1842]: time="2025-05-17T00:37:40.746548173Z" level=info msg="StopPodSandbox for \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\"" May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.801 [WARNING][5692] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" WorkloadEndpoint="ip--172--31--26--143-k8s-whisker--5669ccd8b7--dh98d-eth0" May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.801 [INFO][5692] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.801 [INFO][5692] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" iface="eth0" netns="" May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.801 [INFO][5692] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.801 [INFO][5692] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.839 [INFO][5699] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" HandleID="k8s-pod-network.b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" Workload="ip--172--31--26--143-k8s-whisker--5669ccd8b7--dh98d-eth0" May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.840 [INFO][5699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.840 [INFO][5699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.846 [WARNING][5699] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" HandleID="k8s-pod-network.b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" Workload="ip--172--31--26--143-k8s-whisker--5669ccd8b7--dh98d-eth0" May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.846 [INFO][5699] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" HandleID="k8s-pod-network.b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" Workload="ip--172--31--26--143-k8s-whisker--5669ccd8b7--dh98d-eth0" May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.848 [INFO][5699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:40.852933 env[1842]: 2025-05-17 00:37:40.850 [INFO][5692] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:40.853648 env[1842]: time="2025-05-17T00:37:40.853611881Z" level=info msg="TearDown network for sandbox \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\" successfully" May 17 00:37:40.853736 env[1842]: time="2025-05-17T00:37:40.853713214Z" level=info msg="StopPodSandbox for \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\" returns successfully" May 17 00:37:40.854498 env[1842]: time="2025-05-17T00:37:40.854467496Z" level=info msg="RemovePodSandbox for \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\"" May 17 00:37:40.854627 env[1842]: time="2025-05-17T00:37:40.854505756Z" level=info msg="Forcibly stopping sandbox \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\"" May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.916 [WARNING][5714] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" 
WorkloadEndpoint="ip--172--31--26--143-k8s-whisker--5669ccd8b7--dh98d-eth0" May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.916 [INFO][5714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.916 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" iface="eth0" netns="" May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.916 [INFO][5714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.917 [INFO][5714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.960 [INFO][5721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" HandleID="k8s-pod-network.b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" Workload="ip--172--31--26--143-k8s-whisker--5669ccd8b7--dh98d-eth0" May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.960 [INFO][5721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.960 [INFO][5721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.967 [WARNING][5721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" HandleID="k8s-pod-network.b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" Workload="ip--172--31--26--143-k8s-whisker--5669ccd8b7--dh98d-eth0" May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.967 [INFO][5721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" HandleID="k8s-pod-network.b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" Workload="ip--172--31--26--143-k8s-whisker--5669ccd8b7--dh98d-eth0" May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.969 [INFO][5721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:40.975733 env[1842]: 2025-05-17 00:37:40.971 [INFO][5714] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430" May 17 00:37:40.977742 env[1842]: time="2025-05-17T00:37:40.976033313Z" level=info msg="TearDown network for sandbox \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\" successfully" May 17 00:37:40.980239 env[1842]: time="2025-05-17T00:37:40.980193453Z" level=info msg="RemovePodSandbox \"b4c9861b6d81ac46996457d3fb5d9a979226f2052dd75482c3e8c10dc326a430\" returns successfully" May 17 00:37:40.980858 env[1842]: time="2025-05-17T00:37:40.980806230Z" level=info msg="StopPodSandbox for \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\"" May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.023 [WARNING][5736] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0", GenerateName:"calico-apiserver-74446499d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e86c1943-5287-43a1-8c5c-dfb67368014d", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74446499d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a", Pod:"calico-apiserver-74446499d9-stgnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcd0a3704fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.024 [INFO][5736] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.024 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" iface="eth0" netns="" May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.024 [INFO][5736] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.024 [INFO][5736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.056 [INFO][5743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" HandleID="k8s-pod-network.9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.057 [INFO][5743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.057 [INFO][5743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.063 [WARNING][5743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" HandleID="k8s-pod-network.9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.063 [INFO][5743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" HandleID="k8s-pod-network.9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.065 [INFO][5743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:41.071306 env[1842]: 2025-05-17 00:37:41.068 [INFO][5736] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:41.075022 env[1842]: time="2025-05-17T00:37:41.071348469Z" level=info msg="TearDown network for sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\" successfully" May 17 00:37:41.075022 env[1842]: time="2025-05-17T00:37:41.071387940Z" level=info msg="StopPodSandbox for \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\" returns successfully" May 17 00:37:41.075022 env[1842]: time="2025-05-17T00:37:41.072047525Z" level=info msg="RemovePodSandbox for \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\"" May 17 00:37:41.075022 env[1842]: time="2025-05-17T00:37:41.072083939Z" level=info msg="Forcibly stopping sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\"" May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.125 [WARNING][5759] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0", GenerateName:"calico-apiserver-74446499d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e86c1943-5287-43a1-8c5c-dfb67368014d", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74446499d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"52c9d9034f845b0c566f635001c91e5942addd7a43bf1a44b3e7a9893913776a", Pod:"calico-apiserver-74446499d9-stgnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcd0a3704fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.125 [INFO][5759] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.125 [INFO][5759] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" iface="eth0" netns="" May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.125 [INFO][5759] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.125 [INFO][5759] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.155 [INFO][5766] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" HandleID="k8s-pod-network.9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.155 [INFO][5766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.155 [INFO][5766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.165 [WARNING][5766] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" HandleID="k8s-pod-network.9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.165 [INFO][5766] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" HandleID="k8s-pod-network.9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" Workload="ip--172--31--26--143-k8s-calico--apiserver--74446499d9--stgnw-eth0" May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.168 [INFO][5766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:41.173746 env[1842]: 2025-05-17 00:37:41.170 [INFO][5759] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3" May 17 00:37:41.207703 env[1842]: time="2025-05-17T00:37:41.174370728Z" level=info msg="TearDown network for sandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\" successfully" May 17 00:37:41.207703 env[1842]: time="2025-05-17T00:37:41.179574009Z" level=info msg="RemovePodSandbox \"9d2872c90b4f3aa60e2f5f1a602126d89db6656f082af18efcd03377e34a91b3\" returns successfully" May 17 00:37:41.207703 env[1842]: time="2025-05-17T00:37:41.180236362Z" level=info msg="StopPodSandbox for \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\"" May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.242 [WARNING][5782] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d455070-45c6-475a-be66-925b4a2071bc", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09", Pod:"csi-node-driver-z72kr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1551bc05b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.243 [INFO][5782] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.243 [INFO][5782] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" iface="eth0" netns="" May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.243 [INFO][5782] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.243 [INFO][5782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.285 [INFO][5789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" HandleID="k8s-pod-network.0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.285 [INFO][5789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.285 [INFO][5789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.294 [WARNING][5789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" HandleID="k8s-pod-network.0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.294 [INFO][5789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" HandleID="k8s-pod-network.0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.297 [INFO][5789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:41.306128 env[1842]: 2025-05-17 00:37:41.300 [INFO][5782] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:41.306128 env[1842]: time="2025-05-17T00:37:41.304799945Z" level=info msg="TearDown network for sandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\" successfully" May 17 00:37:41.306128 env[1842]: time="2025-05-17T00:37:41.304872949Z" level=info msg="StopPodSandbox for \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\" returns successfully" May 17 00:37:41.306128 env[1842]: time="2025-05-17T00:37:41.305547548Z" level=info msg="RemovePodSandbox for \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\"" May 17 00:37:41.306128 env[1842]: time="2025-05-17T00:37:41.305605884Z" level=info msg="Forcibly stopping sandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\"" May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.374 [WARNING][5804] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d455070-45c6-475a-be66-925b4a2071bc", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"2eb34a59a674a7557069cab33dc9e1188522eae0249ed3a78995c77dfbdfbc09", Pod:"csi-node-driver-z72kr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1551bc05b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.375 [INFO][5804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.375 [INFO][5804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" iface="eth0" netns="" May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.375 [INFO][5804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.375 [INFO][5804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.413 [INFO][5811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" HandleID="k8s-pod-network.0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.414 [INFO][5811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.414 [INFO][5811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.421 [WARNING][5811] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" HandleID="k8s-pod-network.0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.421 [INFO][5811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" HandleID="k8s-pod-network.0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" Workload="ip--172--31--26--143-k8s-csi--node--driver--z72kr-eth0" May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.424 [INFO][5811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:41.429243 env[1842]: 2025-05-17 00:37:41.426 [INFO][5804] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0" May 17 00:37:41.430969 env[1842]: time="2025-05-17T00:37:41.429287497Z" level=info msg="TearDown network for sandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\" successfully" May 17 00:37:41.433980 env[1842]: time="2025-05-17T00:37:41.433763727Z" level=info msg="RemovePodSandbox \"0eb7a1056d42450e3d2116fba0571d6201358fd3b7488fea57ddbdf79a9523e0\" returns successfully" May 17 00:37:41.434420 env[1842]: time="2025-05-17T00:37:41.434368587Z" level=info msg="StopPodSandbox for \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\"" May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.479 [WARNING][5827] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0", GenerateName:"calico-kube-controllers-7487d56f97-", Namespace:"calico-system", SelfLink:"", UID:"10bbdf63-9286-463c-aba2-5278ed8400da", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7487d56f97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c", Pod:"calico-kube-controllers-7487d56f97-mnmbq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf9601efd43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.480 [INFO][5827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.480 [INFO][5827] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" iface="eth0" netns="" May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.480 [INFO][5827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.480 [INFO][5827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.541 [INFO][5834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" HandleID="k8s-pod-network.dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.553 [INFO][5834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.553 [INFO][5834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.560 [WARNING][5834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" HandleID="k8s-pod-network.dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.560 [INFO][5834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" HandleID="k8s-pod-network.dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.562 [INFO][5834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:41.568961 env[1842]: 2025-05-17 00:37:41.565 [INFO][5827] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:41.568961 env[1842]: time="2025-05-17T00:37:41.567781835Z" level=info msg="TearDown network for sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\" successfully" May 17 00:37:41.568961 env[1842]: time="2025-05-17T00:37:41.567824960Z" level=info msg="StopPodSandbox for \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\" returns successfully" May 17 00:37:41.568961 env[1842]: time="2025-05-17T00:37:41.568481284Z" level=info msg="RemovePodSandbox for \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\"" May 17 00:37:41.568961 env[1842]: time="2025-05-17T00:37:41.568522863Z" level=info msg="Forcibly stopping sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\"" May 17 00:37:41.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.26.143:22-139.178.68.195:60210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:37:41.596533 systemd[1]: Started sshd@8-172.31.26.143:22-139.178.68.195:60210.service. May 17 00:37:41.605092 kernel: kauditd_printk_skb: 19 callbacks suppressed May 17 00:37:41.606310 kernel: audit: type=1130 audit(1747442261.594:437): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.26.143:22-139.178.68.195:60210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.730 [WARNING][5850] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0", GenerateName:"calico-kube-controllers-7487d56f97-", Namespace:"calico-system", SelfLink:"", UID:"10bbdf63-9286-463c-aba2-5278ed8400da", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7487d56f97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"0d880e54c01be7b4d941236a84ff500c07752882f2e66196251c4a44e3e41e1c", Pod:"calico-kube-controllers-7487d56f97-mnmbq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf9601efd43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.730 [INFO][5850] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.730 [INFO][5850] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" iface="eth0" netns="" May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.730 [INFO][5850] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.731 [INFO][5850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.831 [INFO][5867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" HandleID="k8s-pod-network.dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.832 [INFO][5867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.832 [INFO][5867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.838 [WARNING][5867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" HandleID="k8s-pod-network.dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.838 [INFO][5867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" HandleID="k8s-pod-network.dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" Workload="ip--172--31--26--143-k8s-calico--kube--controllers--7487d56f97--mnmbq-eth0" May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.841 [INFO][5867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:41.846065 env[1842]: 2025-05-17 00:37:41.843 [INFO][5850] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24" May 17 00:37:41.847841 env[1842]: time="2025-05-17T00:37:41.846433176Z" level=info msg="TearDown network for sandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\" successfully" May 17 00:37:41.850808 env[1842]: time="2025-05-17T00:37:41.850755204Z" level=info msg="RemovePodSandbox \"dbda08c006ca2afddf8d7a2eb6630e40d97dc303badf1cbddf97cd9411e69b24\" returns successfully" May 17 00:37:41.851531 env[1842]: time="2025-05-17T00:37:41.851496549Z" level=info msg="StopPodSandbox for \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\"" May 17 00:37:41.943000 audit[5854]: USER_ACCT pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:41.983164 kernel: audit: type=1101 audit(1747442261.943:438): pid=5854 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:41.984100 kernel: audit: type=1103 audit(1747442261.952:439): pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:41.984182 kernel: audit: type=1006 audit(1747442261.952:440): pid=5854 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 May 17 00:37:41.984229 kernel: audit: type=1300 audit(1747442261.952:440): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf9875840 a2=3 a3=0 items=0 ppid=1 pid=5854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:41.952000 audit[5854]: CRED_ACQ pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:41.952000 audit[5854]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf9875840 a2=3 a3=0 items=0 ppid=1 pid=5854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:41.991424 kernel: audit: type=1327 audit(1747442261.952:440): proctitle=737368643A20636F7265205B707269765D May 17 00:37:41.952000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:37:41.957196 sshd[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:41.993688 
sshd[5854]: Accepted publickey for core from 139.178.68.195 port 60210 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:37:42.006064 systemd[1]: Started session-9.scope. May 17 00:37:42.007324 systemd-logind[1830]: New session 9 of user core. May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.907 [WARNING][5891] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa", Pod:"coredns-7c65d6cfc9-pvwnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad068b2eb93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.907 [INFO][5891] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.907 [INFO][5891] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" iface="eth0" netns="" May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.907 [INFO][5891] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.907 [INFO][5891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.938 [INFO][5898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" HandleID="k8s-pod-network.67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.938 [INFO][5898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.939 [INFO][5898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.969 [WARNING][5898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" HandleID="k8s-pod-network.67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.969 [INFO][5898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" HandleID="k8s-pod-network.67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.990 [INFO][5898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:42.010098 env[1842]: 2025-05-17 00:37:41.996 [INFO][5891] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:42.011067 env[1842]: time="2025-05-17T00:37:42.011024088Z" level=info msg="TearDown network for sandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\" successfully" May 17 00:37:42.011261 env[1842]: time="2025-05-17T00:37:42.011206820Z" level=info msg="StopPodSandbox for \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\" returns successfully" May 17 00:37:42.018221 env[1842]: time="2025-05-17T00:37:42.018175546Z" level=info msg="RemovePodSandbox for \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\"" May 17 00:37:42.018704 env[1842]: time="2025-05-17T00:37:42.018643594Z" level=info msg="Forcibly stopping sandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\"" May 17 00:37:42.036330 kernel: audit: type=1105 audit(1747442262.019:441): pid=5854 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:42.019000 audit[5854]: USER_START pid=5854 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:42.068089 kernel: audit: type=1103 audit(1747442262.045:442): pid=5906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:42.045000 audit[5906]: CRED_ACQ pid=5906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.154 [WARNING][5915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cb244600-a9a6-4c3e-a26f-c49cbaa5a2ce", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-143", ContainerID:"93ff977aa0fc64c08e5c713b5199617c495d39bb4a421367df98807b546879aa", Pod:"coredns-7c65d6cfc9-pvwnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad068b2eb93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.155 [INFO][5915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.155 [INFO][5915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" iface="eth0" netns="" May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.155 [INFO][5915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.155 [INFO][5915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.219 [INFO][5923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" HandleID="k8s-pod-network.67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.220 [INFO][5923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.220 [INFO][5923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.230 [WARNING][5923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" HandleID="k8s-pod-network.67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.230 [INFO][5923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" HandleID="k8s-pod-network.67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" Workload="ip--172--31--26--143-k8s-coredns--7c65d6cfc9--pvwnn-eth0" May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.233 [INFO][5923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:37:42.240965 env[1842]: 2025-05-17 00:37:42.237 [INFO][5915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef" May 17 00:37:42.240965 env[1842]: time="2025-05-17T00:37:42.240557686Z" level=info msg="TearDown network for sandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\" successfully" May 17 00:37:42.247787 env[1842]: time="2025-05-17T00:37:42.247543819Z" level=info msg="RemovePodSandbox \"67534ab2d6cd3ceca07100614609c2a198cdb5305c14abfd4dbd1695494474ef\" returns successfully" May 17 00:37:42.767494 systemd[1]: run-containerd-runc-k8s.io-6bf024fc83b024802c8b9fc4aa1a3b3e3ef5c240ac73965e8f66c07b2286fd0c-runc.Mxgdv8.mount: Deactivated successfully. 
May 17 00:37:43.323989 sshd[5854]: pam_unix(sshd:session): session closed for user core May 17 00:37:43.323000 audit[5854]: USER_END pid=5854 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:43.328929 systemd-logind[1830]: Session 9 logged out. Waiting for processes to exit. May 17 00:37:43.331523 systemd[1]: sshd@8-172.31.26.143:22-139.178.68.195:60210.service: Deactivated successfully. May 17 00:37:43.332671 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:37:43.324000 audit[5854]: CRED_DISP pid=5854 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:43.336671 systemd-logind[1830]: Removed session 9. May 17 00:37:43.343294 kernel: audit: type=1106 audit(1747442263.323:443): pid=5854 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:43.343443 kernel: audit: type=1104 audit(1747442263.324:444): pid=5854 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:43.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.26.143:22-139.178.68.195:60210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:37:48.351826 systemd[1]: Started sshd@9-172.31.26.143:22-139.178.68.195:48844.service. May 17 00:37:48.363532 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:37:48.363721 kernel: audit: type=1130 audit(1747442268.352:446): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.26.143:22-139.178.68.195:48844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:37:48.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.26.143:22-139.178.68.195:48844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:37:48.624000 audit[5966]: USER_ACCT pid=5966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:48.627334 sshd[5966]: Accepted publickey for core from 139.178.68.195 port 48844 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:37:48.633867 kernel: audit: type=1101 audit(1747442268.624:447): pid=5966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:48.632000 audit[5966]: CRED_ACQ pid=5966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:48.642949 kernel: audit: type=1103 audit(1747442268.632:448): pid=5966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:48.643097 kernel: audit: type=1006 audit(1747442268.632:449): pid=5966 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 17 00:37:48.632000 audit[5966]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe279ba150 a2=3 a3=0 items=0 ppid=1 pid=5966 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:48.655451 kernel: audit: type=1300 audit(1747442268.632:449): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe279ba150 a2=3 a3=0 items=0 ppid=1 pid=5966 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:48.655556 kernel: audit: type=1327 audit(1747442268.632:449): proctitle=737368643A20636F7265205B707269765D May 17 00:37:48.632000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:37:48.646905 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:48.663917 systemd[1]: Started session-10.scope. May 17 00:37:48.665022 systemd-logind[1830]: New session 10 of user core. 
May 17 00:37:48.671000 audit[5966]: USER_START pid=5966 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:48.683106 kernel: audit: type=1105 audit(1747442268.671:450): pid=5966 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:48.682000 audit[5969]: CRED_ACQ pid=5969 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:48.690942 kernel: audit: type=1103 audit(1747442268.682:451): pid=5969 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.214473 sshd[5966]: pam_unix(sshd:session): session closed for user core May 17 00:37:49.215000 audit[5966]: USER_END pid=5966 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.215000 audit[5966]: CRED_DISP pid=5966 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 
17 00:37:49.230300 kernel: audit: type=1106 audit(1747442269.215:452): pid=5966 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.231110 kernel: audit: type=1104 audit(1747442269.215:453): pid=5966 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.230678 systemd[1]: sshd@9-172.31.26.143:22-139.178.68.195:48844.service: Deactivated successfully. May 17 00:37:49.231615 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:37:49.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.26.143:22-139.178.68.195:48844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:37:49.233162 systemd-logind[1830]: Session 10 logged out. Waiting for processes to exit. May 17 00:37:49.237323 systemd[1]: Started sshd@10-172.31.26.143:22-139.178.68.195:48850.service. May 17 00:37:49.238210 systemd-logind[1830]: Removed session 10. May 17 00:37:49.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.26.143:22-139.178.68.195:48850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:37:49.414000 audit[5982]: USER_ACCT pid=5982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.415544 sshd[5982]: Accepted publickey for core from 139.178.68.195 port 48850 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:37:49.416000 audit[5982]: CRED_ACQ pid=5982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.416000 audit[5982]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd61414ee0 a2=3 a3=0 items=0 ppid=1 pid=5982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:49.416000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:37:49.417586 sshd[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:49.423863 systemd-logind[1830]: New session 11 of user core. May 17 00:37:49.424518 systemd[1]: Started session-11.scope. 
May 17 00:37:49.430000 audit[5982]: USER_START pid=5982 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.433000 audit[5985]: CRED_ACQ pid=5985 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.734954 sshd[5982]: pam_unix(sshd:session): session closed for user core May 17 00:37:49.736000 audit[5982]: USER_END pid=5982 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.736000 audit[5982]: CRED_DISP pid=5982 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.26.143:22-139.178.68.195:48850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:37:49.739418 systemd[1]: sshd@10-172.31.26.143:22-139.178.68.195:48850.service: Deactivated successfully. May 17 00:37:49.741181 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:37:49.741543 systemd-logind[1830]: Session 11 logged out. Waiting for processes to exit. May 17 00:37:49.744685 systemd-logind[1830]: Removed session 11. 
May 17 00:37:49.761222 systemd[1]: Started sshd@11-172.31.26.143:22-139.178.68.195:48854.service. May 17 00:37:49.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.26.143:22-139.178.68.195:48854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:37:49.933000 audit[5993]: USER_ACCT pid=5993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.934596 sshd[5993]: Accepted publickey for core from 139.178.68.195 port 48854 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:37:49.934000 audit[5993]: CRED_ACQ pid=5993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.934000 audit[5993]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc07830b50 a2=3 a3=0 items=0 ppid=1 pid=5993 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:49.934000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:37:49.936036 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:49.941937 systemd[1]: Started session-12.scope. May 17 00:37:49.942818 systemd-logind[1830]: New session 12 of user core. 
May 17 00:37:49.947000 audit[5993]: USER_START pid=5993 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:49.949000 audit[5996]: CRED_ACQ pid=5996 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:50.197509 sshd[5993]: pam_unix(sshd:session): session closed for user core May 17 00:37:50.198000 audit[5993]: USER_END pid=5993 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:50.198000 audit[5993]: CRED_DISP pid=5993 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:50.201195 systemd[1]: sshd@11-172.31.26.143:22-139.178.68.195:48854.service: Deactivated successfully. May 17 00:37:50.201933 systemd-logind[1830]: Session 12 logged out. Waiting for processes to exit. May 17 00:37:50.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.26.143:22-139.178.68.195:48854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:37:50.202102 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:37:50.203676 systemd-logind[1830]: Removed session 12. 
May 17 00:37:51.128599 kubelet[2785]: E0517 00:37:51.125155 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:37:53.073729 kubelet[2785]: E0517 00:37:53.073696 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:37:55.222106 systemd[1]: Started sshd@12-172.31.26.143:22-139.178.68.195:43798.service. May 17 00:37:55.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.26.143:22-139.178.68.195:43798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:37:55.224888 kernel: kauditd_printk_skb: 23 callbacks suppressed May 17 00:37:55.224984 kernel: audit: type=1130 audit(1747442275.222:473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.26.143:22-139.178.68.195:43798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:37:55.435000 audit[6009]: USER_ACCT pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:55.437859 sshd[6009]: Accepted publickey for core from 139.178.68.195 port 43798 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:37:55.444975 kernel: audit: type=1101 audit(1747442275.435:474): pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:55.445754 kernel: audit: type=1103 audit(1747442275.444:475): pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:55.444000 audit[6009]: CRED_ACQ pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:55.447337 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:55.461089 kernel: audit: type=1006 audit(1747442275.444:476): pid=6009 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 May 17 00:37:55.461217 kernel: audit: type=1300 audit(1747442275.444:476): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4131ccd0 a2=3 a3=0 items=0 ppid=1 pid=6009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:55.444000 audit[6009]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4131ccd0 a2=3 a3=0 items=0 ppid=1 pid=6009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:37:55.460395 systemd[1]: Started session-13.scope. May 17 00:37:55.462887 systemd-logind[1830]: New session 13 of user core. May 17 00:37:55.444000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:37:55.480059 kernel: audit: type=1327 audit(1747442275.444:476): proctitle=737368643A20636F7265205B707269765D May 17 00:37:55.470000 audit[6009]: USER_START pid=6009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:55.494903 kernel: audit: type=1105 audit(1747442275.470:477): pid=6009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:55.472000 audit[6012]: CRED_ACQ pid=6012 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:55.501877 kernel: audit: type=1103 audit(1747442275.472:478): pid=6012 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' May 17 00:37:55.693991 sshd[6009]: pam_unix(sshd:session): session closed for user core May 17 00:37:55.697000 audit[6009]: USER_END pid=6009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:55.707652 kernel: audit: type=1106 audit(1747442275.697:479): pid=6009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:55.707804 kernel: audit: type=1104 audit(1747442275.702:480): pid=6009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:55.702000 audit[6009]: CRED_DISP pid=6009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:37:55.705802 systemd[1]: sshd@12-172.31.26.143:22-139.178.68.195:43798.service: Deactivated successfully. May 17 00:37:55.707092 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:37:55.714646 systemd-logind[1830]: Session 13 logged out. Waiting for processes to exit. May 17 00:37:55.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.26.143:22-139.178.68.195:43798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:37:55.715995 systemd-logind[1830]: Removed session 13. May 17 00:38:00.727267 systemd[1]: Started sshd@13-172.31.26.143:22-139.178.68.195:43808.service. May 17 00:38:00.741990 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:38:00.743380 kernel: audit: type=1130 audit(1747442280.727:482): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.26.143:22-139.178.68.195:43808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:00.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.26.143:22-139.178.68.195:43808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:01.029000 audit[6022]: USER_ACCT pid=6022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:01.046517 kernel: audit: type=1101 audit(1747442281.029:483): pid=6022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:01.049037 sshd[6022]: Accepted publickey for core from 139.178.68.195 port 43808 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:38:01.049000 audit[6022]: CRED_ACQ pid=6022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:01.059419 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by 
(uid=0) May 17 00:38:01.069341 kernel: audit: type=1103 audit(1747442281.049:484): pid=6022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:01.069466 kernel: audit: type=1006 audit(1747442281.049:485): pid=6022 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 May 17 00:38:01.069502 kernel: audit: type=1300 audit(1747442281.049:485): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde30acd20 a2=3 a3=0 items=0 ppid=1 pid=6022 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:01.049000 audit[6022]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde30acd20 a2=3 a3=0 items=0 ppid=1 pid=6022 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:01.071627 systemd[1]: Started session-14.scope. May 17 00:38:01.073559 systemd-logind[1830]: New session 14 of user core. 
May 17 00:38:01.078221 kernel: audit: type=1327 audit(1747442281.049:485): proctitle=737368643A20636F7265205B707269765D May 17 00:38:01.049000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:38:01.086000 audit[6022]: USER_START pid=6022 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:01.099289 kernel: audit: type=1105 audit(1747442281.086:486): pid=6022 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:01.087000 audit[6025]: CRED_ACQ pid=6025 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:01.108850 kernel: audit: type=1103 audit(1747442281.087:487): pid=6025 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:02.099855 kubelet[2785]: I0517 00:38:02.097604 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:38:03.461402 sshd[6022]: pam_unix(sshd:session): session closed for user core May 17 00:38:03.487490 kernel: audit: type=1106 audit(1747442283.462:488): pid=6022 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:03.462000 audit[6022]: USER_END pid=6022 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:03.482754 systemd[1]: sshd@13-172.31.26.143:22-139.178.68.195:43808.service: Deactivated successfully. May 17 00:38:03.484199 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:38:03.486794 systemd-logind[1830]: Session 14 logged out. Waiting for processes to exit. May 17 00:38:03.495731 systemd-logind[1830]: Removed session 14. May 17 00:38:03.535901 kernel: audit: type=1104 audit(1747442283.462:489): pid=6022 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:03.462000 audit[6022]: CRED_DISP pid=6022 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:03.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.26.143:22-139.178.68.195:43808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:38:03.756000 audit[6042]: NETFILTER_CFG table=filter:131 family=2 entries=12 op=nft_register_rule pid=6042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:38:03.756000 audit[6042]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe94137860 a2=0 a3=7ffe9413784c items=0 ppid=2892 pid=6042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:03.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:38:03.763000 audit[6042]: NETFILTER_CFG table=nat:132 family=2 entries=34 op=nft_register_chain pid=6042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:38:03.763000 audit[6042]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffe94137860 a2=0 a3=7ffe9413784c items=0 ppid=2892 pid=6042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:03.763000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:38:06.116803 env[1842]: time="2025-05-17T00:38:06.116329373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:38:06.321355 env[1842]: time="2025-05-17T00:38:06.321104886Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:38:06.323374 env[1842]: time="2025-05-17T00:38:06.323218209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:38:06.330660 kubelet[2785]: E0517 00:38:06.327274 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:38:06.331394 kubelet[2785]: E0517 00:38:06.331360 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:38:06.394933 kubelet[2785]: E0517 00:38:06.394761 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fecda21b58645b7beb0200da9036e4d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbhwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6686775679-47m7l_calico-system(24aa9389-753f-4495-bc1f-dde18df8a4e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:38:06.400752 env[1842]: time="2025-05-17T00:38:06.397518678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:38:06.574328 
env[1842]: time="2025-05-17T00:38:06.574229952Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:38:06.576952 env[1842]: time="2025-05-17T00:38:06.576878723Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:38:06.577210 kubelet[2785]: E0517 00:38:06.577163 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:38:06.577303 kubelet[2785]: E0517 00:38:06.577230 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:38:06.577423 kubelet[2785]: E0517 00:38:06.577375 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbhwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6686775679-47m7l_calico-system(24aa9389-753f-4495-bc1f-dde18df8a4e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:38:06.583246 kubelet[2785]: E0517 00:38:06.583171 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:38:08.074127 env[1842]: time="2025-05-17T00:38:08.073756955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:38:08.252348 env[1842]: time="2025-05-17T00:38:08.252252378Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:38:08.255086 env[1842]: time="2025-05-17T00:38:08.254961629Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:38:08.255427 kubelet[2785]: E0517 00:38:08.255367 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:38:08.255807 kubelet[2785]: E0517 00:38:08.255439 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:38:08.255807 kubelet[2785]: E0517 00:38:08.255656 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagat
ion:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwwxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-m25cz_calico-system(d4a0e73a-3af9-4fe3-8741-031055e915ab): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:38:08.257269 kubelet[2785]: E0517 00:38:08.257222 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:38:08.484625 systemd[1]: Started sshd@14-172.31.26.143:22-139.178.68.195:56696.service. May 17 00:38:08.495402 kernel: kauditd_printk_skb: 7 callbacks suppressed May 17 00:38:08.496322 kernel: audit: type=1130 audit(1747442288.485:493): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.26.143:22-139.178.68.195:56696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:08.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.26.143:22-139.178.68.195:56696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:38:08.728000 audit[6043]: USER_ACCT pid=6043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:08.737959 kernel: audit: type=1101 audit(1747442288.728:494): pid=6043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:08.738701 sshd[6043]: Accepted publickey for core from 139.178.68.195 port 56696 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:38:08.738000 audit[6043]: CRED_ACQ pid=6043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:08.749915 kernel: audit: type=1103 audit(1747442288.738:495): pid=6043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:08.750395 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:08.756950 kernel: audit: type=1006 audit(1747442288.738:496): pid=6043 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 May 17 00:38:08.738000 audit[6043]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff46df6e60 a2=3 a3=0 items=0 ppid=1 pid=6043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:08.771195 kernel: audit: type=1300 audit(1747442288.738:496): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff46df6e60 a2=3 a3=0 items=0 ppid=1 pid=6043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:08.771321 kernel: audit: type=1327 audit(1747442288.738:496): proctitle=737368643A20636F7265205B707269765D May 17 00:38:08.738000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:38:08.779804 systemd-logind[1830]: New session 15 of user core. May 17 00:38:08.781110 systemd[1]: Started session-15.scope. May 17 00:38:08.791000 audit[6043]: USER_START pid=6043 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:08.803996 kernel: audit: type=1105 audit(1747442288.791:497): pid=6043 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:08.802000 audit[6046]: CRED_ACQ pid=6046 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:08.820908 kernel: audit: type=1103 audit(1747442288.802:498): pid=6046 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' May 17 00:38:09.604359 sshd[6043]: pam_unix(sshd:session): session closed for user core May 17 00:38:09.619915 kernel: audit: type=1106 audit(1747442289.609:499): pid=6043 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:09.609000 audit[6043]: USER_END pid=6043 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:09.613053 systemd-logind[1830]: Session 15 logged out. Waiting for processes to exit. May 17 00:38:09.614789 systemd[1]: sshd@14-172.31.26.143:22-139.178.68.195:56696.service: Deactivated successfully. May 17 00:38:09.616005 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:38:09.617726 systemd-logind[1830]: Removed session 15. May 17 00:38:09.609000 audit[6043]: CRED_DISP pid=6043 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:09.634860 kernel: audit: type=1104 audit(1747442289.609:500): pid=6043 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:09.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.26.143:22-139.178.68.195:56696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:38:12.770308 systemd[1]: run-containerd-runc-k8s.io-6bf024fc83b024802c8b9fc4aa1a3b3e3ef5c240ac73965e8f66c07b2286fd0c-runc.Do0TbT.mount: Deactivated successfully. May 17 00:38:14.634368 systemd[1]: Started sshd@15-172.31.26.143:22-139.178.68.195:52880.service. May 17 00:38:14.647496 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:38:14.685753 kernel: audit: type=1130 audit(1747442294.633:502): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.26.143:22-139.178.68.195:52880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:14.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.26.143:22-139.178.68.195:52880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:14.906000 audit[6098]: USER_ACCT pid=6098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:14.916239 sshd[6098]: Accepted publickey for core from 139.178.68.195 port 52880 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:38:14.917516 kernel: audit: type=1101 audit(1747442294.906:503): pid=6098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:14.918920 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:14.915000 audit[6098]: CRED_ACQ pid=6098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:14.929867 kernel: audit: type=1103 audit(1747442294.915:504): pid=6098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:14.934987 kernel: audit: type=1006 audit(1747442294.915:505): pid=6098 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 May 17 00:38:14.915000 audit[6098]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd5cf58e0 a2=3 a3=0 items=0 ppid=1 pid=6098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:14.951267 kernel: audit: type=1300 audit(1747442294.915:505): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd5cf58e0 a2=3 a3=0 items=0 ppid=1 pid=6098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:14.956577 kernel: audit: type=1327 audit(1747442294.915:505): proctitle=737368643A20636F7265205B707269765D May 17 00:38:14.915000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:38:14.955891 systemd-logind[1830]: New session 16 of user core. May 17 00:38:14.958529 systemd[1]: Started session-16.scope. 
May 17 00:38:14.967000 audit[6098]: USER_START pid=6098 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:14.970000 audit[6103]: CRED_ACQ pid=6103 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:14.986187 kernel: audit: type=1105 audit(1747442294.967:506): pid=6098 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:14.986481 kernel: audit: type=1103 audit(1747442294.970:507): pid=6103 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:15.816162 sshd[6098]: pam_unix(sshd:session): session closed for user core May 17 00:38:15.817000 audit[6098]: USER_END pid=6098 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:15.825865 kernel: audit: type=1106 audit(1747442295.817:508): pid=6098 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:15.828044 systemd-logind[1830]: Session 16 logged out. Waiting for processes to exit. May 17 00:38:15.829865 systemd[1]: sshd@15-172.31.26.143:22-139.178.68.195:52880.service: Deactivated successfully. May 17 00:38:15.830709 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:38:15.832504 systemd-logind[1830]: Removed session 16. May 17 00:38:15.840926 kernel: audit: type=1104 audit(1747442295.824:509): pid=6098 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:15.824000 audit[6098]: CRED_DISP pid=6098 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:15.847362 systemd[1]: Started sshd@16-172.31.26.143:22-139.178.68.195:52884.service. May 17 00:38:15.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.26.143:22-139.178.68.195:52880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:15.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.26.143:22-139.178.68.195:52884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:38:16.032000 audit[6112]: USER_ACCT pid=6112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:16.035095 sshd[6112]: Accepted publickey for core from 139.178.68.195 port 52884 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:38:16.034000 audit[6112]: CRED_ACQ pid=6112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:16.034000 audit[6112]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed6a27860 a2=3 a3=0 items=0 ppid=1 pid=6112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:16.034000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:38:16.035737 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:16.042969 systemd[1]: Started session-17.scope. May 17 00:38:16.043598 systemd-logind[1830]: New session 17 of user core. 
May 17 00:38:16.054000 audit[6112]: USER_START pid=6112 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:16.056000 audit[6115]: CRED_ACQ pid=6115 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:16.904221 sshd[6112]: pam_unix(sshd:session): session closed for user core May 17 00:38:16.905000 audit[6112]: USER_END pid=6112 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:16.907000 audit[6112]: CRED_DISP pid=6112 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:16.910727 systemd-logind[1830]: Session 17 logged out. Waiting for processes to exit. May 17 00:38:16.911238 systemd[1]: sshd@16-172.31.26.143:22-139.178.68.195:52884.service: Deactivated successfully. May 17 00:38:16.912428 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:38:16.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.26.143:22-139.178.68.195:52884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:16.914602 systemd-logind[1830]: Removed session 17. 
May 17 00:38:16.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.26.143:22-139.178.68.195:52898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:16.927023 systemd[1]: Started sshd@17-172.31.26.143:22-139.178.68.195:52898.service. May 17 00:38:17.123000 audit[6123]: USER_ACCT pid=6123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:17.125295 sshd[6123]: Accepted publickey for core from 139.178.68.195 port 52898 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:38:17.125000 audit[6123]: CRED_ACQ pid=6123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:17.125000 audit[6123]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0d132d90 a2=3 a3=0 items=0 ppid=1 pid=6123 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:17.125000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:38:17.127275 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:17.136263 systemd[1]: Started session-18.scope. May 17 00:38:17.136723 systemd-logind[1830]: New session 18 of user core. 
May 17 00:38:17.144000 audit[6123]: USER_START pid=6123 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:17.146000 audit[6126]: CRED_ACQ pid=6126 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:19.741115 kubelet[2785]: E0517 00:38:19.741037 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:38:20.073414 kubelet[2785]: E0517 00:38:20.073368 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:38:24.403326 kernel: kauditd_printk_skb: 20 callbacks suppressed May 17 00:38:24.623214 kernel: audit: type=1325 audit(1747442304.359:526): table=filter:133 family=2 entries=24 op=nft_register_rule pid=6139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:38:24.625182 kernel: audit: type=1300 audit(1747442304.359:526): arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffecd96bae0 a2=0 a3=7ffecd96bacc items=0 ppid=2892 pid=6139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:24.628393 kernel: audit: type=1327 audit(1747442304.359:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:38:24.628474 kernel: audit: type=1325 audit(1747442304.370:527): table=nat:134 family=2 entries=22 op=nft_register_rule pid=6139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:38:24.628500 kernel: audit: type=1300 audit(1747442304.370:527): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffecd96bae0 a2=0 a3=0 items=0 ppid=2892 pid=6139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:24.628525 kernel: audit: type=1327 audit(1747442304.370:527): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:38:24.359000 audit[6139]: NETFILTER_CFG table=filter:133 family=2 entries=24 op=nft_register_rule pid=6139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:38:24.359000 audit[6139]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffecd96bae0 a2=0 a3=7ffecd96bacc items=0 ppid=2892 pid=6139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:24.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:38:24.370000 audit[6139]: NETFILTER_CFG table=nat:134 family=2 entries=22 op=nft_register_rule pid=6139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 
00:38:24.370000 audit[6139]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffecd96bae0 a2=0 a3=0 items=0 ppid=2892 pid=6139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:24.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:38:25.987163 sshd[6123]: pam_unix(sshd:session): session closed for user core May 17 00:38:26.205778 kernel: audit: type=1325 audit(1747442306.152:528): table=filter:135 family=2 entries=36 op=nft_register_rule pid=6141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:38:26.217286 kernel: audit: type=1300 audit(1747442306.152:528): arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7fff30d30670 a2=0 a3=7fff30d3065c items=0 ppid=2892 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:26.217366 kernel: audit: type=1327 audit(1747442306.152:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:38:26.217392 kernel: audit: type=1106 audit(1747442306.168:529): pid=6123 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:26.152000 audit[6141]: NETFILTER_CFG table=filter:135 family=2 entries=36 op=nft_register_rule pid=6141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:38:26.152000 audit[6141]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=13432 a0=3 a1=7fff30d30670 a2=0 a3=7fff30d3065c items=0 ppid=2892 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:26.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:38:26.168000 audit[6123]: USER_END pid=6123 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:26.169000 audit[6141]: NETFILTER_CFG table=nat:136 family=2 entries=22 op=nft_register_rule pid=6141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:38:26.169000 audit[6141]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff30d30670 a2=0 a3=0 items=0 ppid=2892 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:26.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:38:26.169000 audit[6123]: CRED_DISP pid=6123 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:26.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.26.143:22-139.178.68.195:44700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:38:26.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.26.143:22-139.178.68.195:52898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:26.221266 systemd[1]: Started sshd@18-172.31.26.143:22-139.178.68.195:44700.service. May 17 00:38:26.229278 systemd[1]: sshd@17-172.31.26.143:22-139.178.68.195:52898.service: Deactivated successfully. May 17 00:38:26.230568 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:38:26.231891 systemd-logind[1830]: Session 18 logged out. Waiting for processes to exit. May 17 00:38:26.233023 systemd-logind[1830]: Removed session 18. May 17 00:38:26.575000 audit[6142]: USER_ACCT pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:26.577735 sshd[6142]: Accepted publickey for core from 139.178.68.195 port 44700 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:38:26.577000 audit[6142]: CRED_ACQ pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:26.577000 audit[6142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffaf531220 a2=3 a3=0 items=0 ppid=1 pid=6142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:26.577000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:38:26.582721 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 
00:38:26.598487 systemd[1]: Started session-19.scope. May 17 00:38:26.600004 systemd-logind[1830]: New session 19 of user core. May 17 00:38:26.619000 audit[6142]: USER_START pid=6142 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:26.622000 audit[6147]: CRED_ACQ pid=6147 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:31.284405 kubelet[2785]: E0517 00:38:31.284350 2785 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.783s" May 17 00:38:31.451336 kubelet[2785]: E0517 00:38:31.451277 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:38:31.619938 sshd[6142]: pam_unix(sshd:session): session closed for user core May 17 00:38:31.676493 systemd[1]: Started sshd@19-172.31.26.143:22-139.178.68.195:44704.service. May 17 00:38:31.685149 kernel: kauditd_printk_skb: 13 callbacks suppressed May 17 00:38:31.691441 kernel: audit: type=1130 audit(1747442311.675:539): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.26.143:22-139.178.68.195:44704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:38:31.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.26.143:22-139.178.68.195:44704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:31.690000 audit[6142]: USER_END pid=6142 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:31.697840 systemd[1]: sshd@18-172.31.26.143:22-139.178.68.195:44700.service: Deactivated successfully. May 17 00:38:31.718213 kernel: audit: type=1106 audit(1747442311.690:540): pid=6142 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:31.699192 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:38:31.715573 systemd-logind[1830]: Session 19 logged out. Waiting for processes to exit. May 17 00:38:31.693000 audit[6142]: CRED_DISP pid=6142 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:31.728958 systemd-logind[1830]: Removed session 19. May 17 00:38:31.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.26.143:22-139.178.68.195:44700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:38:31.745333 kernel: audit: type=1104 audit(1747442311.693:541): pid=6142 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:31.747041 kernel: audit: type=1131 audit(1747442311.697:542): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.26.143:22-139.178.68.195:44700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:31.975969 sshd[6156]: Accepted publickey for core from 139.178.68.195 port 44704 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:38:31.974000 audit[6156]: USER_ACCT pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:31.987864 kernel: audit: type=1101 audit(1747442311.974:543): pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:31.986000 audit[6156]: CRED_ACQ pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:31.996867 kernel: audit: type=1103 audit(1747442311.986:544): pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 
terminal=ssh res=success' May 17 00:38:32.004009 kernel: audit: type=1006 audit(1747442311.986:545): pid=6156 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 May 17 00:38:31.986000 audit[6156]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff30d018d0 a2=3 a3=0 items=0 ppid=1 pid=6156 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:32.015721 kernel: audit: type=1300 audit(1747442311.986:545): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff30d018d0 a2=3 a3=0 items=0 ppid=1 pid=6156 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:32.014300 sshd[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:31.986000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:38:32.024037 kernel: audit: type=1327 audit(1747442311.986:545): proctitle=737368643A20636F7265205B707269765D May 17 00:38:32.030121 systemd-logind[1830]: New session 20 of user core. May 17 00:38:32.031482 systemd[1]: Started session-20.scope. 
May 17 00:38:32.042000 audit[6156]: USER_START pid=6156 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:32.054907 kernel: audit: type=1105 audit(1747442312.042:546): pid=6156 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:32.053000 audit[6161]: CRED_ACQ pid=6161 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:33.118798 sshd[6156]: pam_unix(sshd:session): session closed for user core May 17 00:38:33.119000 audit[6156]: USER_END pid=6156 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:33.120000 audit[6156]: CRED_DISP pid=6156 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:33.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.26.143:22-139.178.68.195:44704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:38:33.123570 systemd[1]: sshd@19-172.31.26.143:22-139.178.68.195:44704.service: Deactivated successfully. May 17 00:38:33.125316 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:38:33.126042 systemd-logind[1830]: Session 20 logged out. Waiting for processes to exit. May 17 00:38:33.129470 systemd-logind[1830]: Removed session 20. May 17 00:38:33.661000 audit[6191]: NETFILTER_CFG table=filter:137 family=2 entries=24 op=nft_register_rule pid=6191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:38:33.661000 audit[6191]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd21e294d0 a2=0 a3=7ffd21e294bc items=0 ppid=2892 pid=6191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:33.661000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:38:33.671000 audit[6191]: NETFILTER_CFG table=nat:138 family=2 entries=106 op=nft_register_chain pid=6191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:38:33.671000 audit[6191]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffd21e294d0 a2=0 a3=7ffd21e294bc items=0 ppid=2892 pid=6191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:33.671000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:38:34.656519 kubelet[2785]: E0517 00:38:34.656444 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:38:38.173161 systemd[1]: Started sshd@20-172.31.26.143:22-139.178.68.195:32876.service. May 17 00:38:38.183749 kernel: kauditd_printk_skb: 10 callbacks suppressed May 17 00:38:38.189467 kernel: audit: type=1130 audit(1747442318.172:553): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.26.143:22-139.178.68.195:32876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:38.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.26.143:22-139.178.68.195:32876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:38.468000 audit[6193]: USER_ACCT pid=6193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:38.492111 kernel: audit: type=1101 audit(1747442318.468:554): pid=6193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:38.492220 kernel: audit: type=1103 audit(1747442318.471:555): pid=6193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:38.492258 kernel: audit: type=1006 audit(1747442318.471:556): pid=6193 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=21 res=1 May 17 00:38:38.501434 kernel: audit: type=1300 audit(1747442318.471:556): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc332b7d00 a2=3 a3=0 items=0 ppid=1 pid=6193 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:38.471000 audit[6193]: CRED_ACQ pid=6193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:38.471000 audit[6193]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc332b7d00 a2=3 a3=0 items=0 ppid=1 pid=6193 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:38.514006 kernel: audit: type=1327 audit(1747442318.471:556): proctitle=737368643A20636F7265205B707269765D May 17 00:38:38.471000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:38:38.514184 sshd[6193]: Accepted publickey for core from 139.178.68.195 port 32876 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:38:38.480123 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:38.522868 systemd[1]: Started session-21.scope. May 17 00:38:38.523919 systemd-logind[1830]: New session 21 of user core. 
May 17 00:38:38.536000 audit[6193]: USER_START pid=6193 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:38.547854 kernel: audit: type=1105 audit(1747442318.536:557): pid=6193 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:38.536000 audit[6196]: CRED_ACQ pid=6196 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:38.562005 kernel: audit: type=1103 audit(1747442318.536:558): pid=6196 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:40.670155 sshd[6193]: pam_unix(sshd:session): session closed for user core May 17 00:38:40.688094 kernel: audit: type=1106 audit(1747442320.676:559): pid=6193 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:40.689912 kernel: audit: type=1104 audit(1747442320.677:560): pid=6193 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:40.676000 audit[6193]: USER_END pid=6193 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:40.677000 audit[6193]: CRED_DISP pid=6193 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:40.682850 systemd[1]: sshd@20-172.31.26.143:22-139.178.68.195:32876.service: Deactivated successfully. May 17 00:38:40.687045 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:38:40.687903 systemd-logind[1830]: Session 21 logged out. Waiting for processes to exit. May 17 00:38:40.690655 systemd-logind[1830]: Removed session 21. May 17 00:38:40.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.26.143:22-139.178.68.195:32876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:42.793010 systemd[1]: run-containerd-runc-k8s.io-6bf024fc83b024802c8b9fc4aa1a3b3e3ef5c240ac73965e8f66c07b2286fd0c-runc.gxd0O8.mount: Deactivated successfully. May 17 00:38:45.711524 systemd[1]: Started sshd@21-172.31.26.143:22-139.178.68.195:51856.service. May 17 00:38:45.721573 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:38:45.724792 kernel: audit: type=1130 audit(1747442325.711:562): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.26.143:22-139.178.68.195:51856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:38:45.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.26.143:22-139.178.68.195:51856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:46.016000 audit[6254]: USER_ACCT pid=6254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:46.021417 sshd[6254]: Accepted publickey for core from 139.178.68.195 port 51856 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:38:46.043647 kernel: audit: type=1101 audit(1747442326.016:563): pid=6254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:46.043747 kernel: audit: type=1103 audit(1747442326.025:564): pid=6254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:46.043790 kernel: audit: type=1006 audit(1747442326.025:565): pid=6254 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 May 17 00:38:46.025000 audit[6254]: CRED_ACQ pid=6254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:46.043492 sshd[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 
00:38:46.025000 audit[6254]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4a225fc0 a2=3 a3=0 items=0 ppid=1 pid=6254 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:46.058655 kernel: audit: type=1300 audit(1747442326.025:565): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4a225fc0 a2=3 a3=0 items=0 ppid=1 pid=6254 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:46.025000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:38:46.067044 kernel: audit: type=1327 audit(1747442326.025:565): proctitle=737368643A20636F7265205B707269765D May 17 00:38:46.092617 systemd[1]: Started session-22.scope. May 17 00:38:46.093178 systemd-logind[1830]: New session 22 of user core. May 17 00:38:46.117110 kernel: audit: type=1105 audit(1747442326.105:566): pid=6254 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:46.105000 audit[6254]: USER_START pid=6254 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:46.118000 audit[6257]: CRED_ACQ pid=6257 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:46.130858 kernel: audit: 
type=1103 audit(1747442326.118:567): pid=6257 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:46.298922 kubelet[2785]: E0517 00:38:46.298793 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:38:47.079987 kubelet[2785]: E0517 00:38:47.079913 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:38:47.360649 sshd[6254]: pam_unix(sshd:session): session closed for user core May 17 00:38:47.362000 audit[6254]: USER_END pid=6254 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:47.373076 kernel: audit: type=1106 audit(1747442327.362:568): pid=6254 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:47.367736 systemd-logind[1830]: Session 22 
logged out. Waiting for processes to exit. May 17 00:38:47.370253 systemd[1]: sshd@21-172.31.26.143:22-139.178.68.195:51856.service: Deactivated successfully. May 17 00:38:47.372282 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:38:47.374526 systemd-logind[1830]: Removed session 22. May 17 00:38:47.388154 kernel: audit: type=1104 audit(1747442327.363:569): pid=6254 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:47.363000 audit[6254]: CRED_DISP pid=6254 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:47.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.26.143:22-139.178.68.195:51856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:52.386849 systemd[1]: Started sshd@22-172.31.26.143:22-139.178.68.195:51866.service. May 17 00:38:52.399460 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:38:52.399606 kernel: audit: type=1130 audit(1747442332.388:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.26.143:22-139.178.68.195:51866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:52.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.26.143:22-139.178.68.195:51866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:38:52.637000 audit[6276]: USER_ACCT pid=6276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:52.654865 kernel: audit: type=1101 audit(1747442332.637:572): pid=6276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:52.654996 kernel: audit: type=1103 audit(1747442332.646:573): pid=6276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:52.646000 audit[6276]: CRED_ACQ pid=6276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:52.655265 sshd[6276]: Accepted publickey for core from 139.178.68.195 port 51866 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:38:52.650765 sshd[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:52.665856 kernel: audit: type=1006 audit(1747442332.646:574): pid=6276 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 May 17 00:38:52.646000 audit[6276]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcc660df0 a2=3 a3=0 items=0 ppid=1 pid=6276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:52.646000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:38:52.679465 kernel: audit: type=1300 audit(1747442332.646:574): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcc660df0 a2=3 a3=0 items=0 ppid=1 pid=6276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:52.679612 kernel: audit: type=1327 audit(1747442332.646:574): proctitle=737368643A20636F7265205B707269765D May 17 00:38:52.684027 systemd-logind[1830]: New session 23 of user core. May 17 00:38:52.686121 systemd[1]: Started session-23.scope. May 17 00:38:52.692000 audit[6276]: USER_START pid=6276 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:52.707595 kernel: audit: type=1105 audit(1747442332.692:575): pid=6276 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:52.707758 kernel: audit: type=1103 audit(1747442332.694:576): pid=6279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:52.694000 audit[6279]: CRED_ACQ pid=6279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' May 17 00:38:53.573618 sshd[6276]: pam_unix(sshd:session): session closed for user core May 17 00:38:53.582000 audit[6276]: USER_END pid=6276 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:53.592861 kernel: audit: type=1106 audit(1747442333.582:577): pid=6276 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:53.592000 audit[6276]: CRED_DISP pid=6276 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:53.602853 kernel: audit: type=1104 audit(1747442333.592:578): pid=6276 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:53.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.26.143:22-139.178.68.195:51866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:53.615599 systemd[1]: sshd@22-172.31.26.143:22-139.178.68.195:51866.service: Deactivated successfully. May 17 00:38:53.617071 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:38:53.619790 systemd-logind[1830]: Session 23 logged out. Waiting for processes to exit. 
May 17 00:38:53.621478 systemd-logind[1830]: Removed session 23. May 17 00:38:58.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.26.143:22-139.178.68.195:46322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.588959 systemd[1]: Started sshd@23-172.31.26.143:22-139.178.68.195:46322.service. May 17 00:38:58.592533 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:38:58.592601 kernel: audit: type=1130 audit(1747442338.588:580): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.26.143:22-139.178.68.195:46322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.842000 audit[6289]: USER_ACCT pid=6289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:58.850541 sshd[6289]: Accepted publickey for core from 139.178.68.195 port 46322 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:38:58.851753 kernel: audit: type=1101 audit(1747442338.842:581): pid=6289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:58.858818 kernel: audit: type=1103 audit(1747442338.851:582): pid=6289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:58.858922 kernel: audit: type=1006 audit(1747442338.851:583): 
pid=6289 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 May 17 00:38:58.851000 audit[6289]: CRED_ACQ pid=6289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:58.856001 sshd[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:58.851000 audit[6289]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebb36e100 a2=3 a3=0 items=0 ppid=1 pid=6289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:58.870579 kernel: audit: type=1300 audit(1747442338.851:583): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebb36e100 a2=3 a3=0 items=0 ppid=1 pid=6289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:38:58.870708 kernel: audit: type=1327 audit(1747442338.851:583): proctitle=737368643A20636F7265205B707269765D May 17 00:38:58.851000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:38:58.872178 systemd[1]: Started session-24.scope. May 17 00:38:58.873280 systemd-logind[1830]: New session 24 of user core. 
May 17 00:38:58.879000 audit[6289]: USER_START pid=6289 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:58.879000 audit[6292]: CRED_ACQ pid=6292 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:58.896195 kernel: audit: type=1105 audit(1747442338.879:584): pid=6289 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:58.896304 kernel: audit: type=1103 audit(1747442338.879:585): pid=6292 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:59.729195 sshd[6289]: pam_unix(sshd:session): session closed for user core May 17 00:38:59.729000 audit[6289]: USER_END pid=6289 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:59.733061 systemd[1]: sshd@23-172.31.26.143:22-139.178.68.195:46322.service: Deactivated successfully. May 17 00:38:59.733993 systemd[1]: session-24.scope: Deactivated successfully. 
May 17 00:38:59.740673 kernel: audit: type=1106 audit(1747442339.729:586): pid=6289 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:59.742256 kernel: audit: type=1104 audit(1747442339.730:587): pid=6289 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:59.730000 audit[6289]: CRED_DISP pid=6289 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:38:59.745337 systemd-logind[1830]: Session 24 logged out. Waiting for processes to exit. May 17 00:38:59.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.26.143:22-139.178.68.195:46322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.746749 systemd-logind[1830]: Removed session 24. 
May 17 00:39:01.262485 env[1842]: time="2025-05-17T00:39:01.262149988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:39:01.472968 env[1842]: time="2025-05-17T00:39:01.471263436Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:39:01.475569 env[1842]: time="2025-05-17T00:39:01.475157819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:39:01.533143 kubelet[2785]: E0517 00:39:01.496820 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:39:01.571390 kubelet[2785]: E0517 00:39:01.564173 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:39:01.572083 env[1842]: time="2025-05-17T00:39:01.571944484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:39:01.805608 kubelet[2785]: E0517 00:39:01.803350 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fecda21b58645b7beb0200da9036e4d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbhwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6686775679-47m7l_calico-system(24aa9389-753f-4495-bc1f-dde18df8a4e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:39:02.611487 amazon-ssm-agent[1891]: 2025-05-17 00:39:02 INFO [HealthCheck] HealthCheck reporting agent health. 
May 17 00:39:03.106498 env[1842]: time="2025-05-17T00:39:03.106245395Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:39:03.147237 env[1842]: time="2025-05-17T00:39:03.116508435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:39:03.172223 kubelet[2785]: E0517 00:39:03.172154 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:39:03.193911 kubelet[2785]: E0517 00:39:03.172233 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:39:03.198910 kubelet[2785]: E0517 00:39:03.198799 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwwxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-m25cz_calico-system(d4a0e73a-3af9-4fe3-8741-031055e915ab): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:39:03.421338 env[1842]: time="2025-05-17T00:39:03.420241259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:39:03.470470 kubelet[2785]: E0517 00:39:03.429141 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:39:03.619017 env[1842]: time="2025-05-17T00:39:03.618714817Z" level=info msg="trying next host" error="failed to 
authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:39:03.624108 env[1842]: time="2025-05-17T00:39:03.623529650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:39:03.624305 kubelet[2785]: E0517 00:39:03.623906 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:39:03.624305 kubelet[2785]: E0517 00:39:03.623964 2785 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:39:03.624305 kubelet[2785]: E0517 00:39:03.624106 2785 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbhwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6686775679-47m7l_calico-system(24aa9389-753f-4495-bc1f-dde18df8a4e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:39:03.628973 kubelet[2785]: E0517 00:39:03.626403 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:39:04.770025 systemd[1]: Started sshd@24-172.31.26.143:22-139.178.68.195:44378.service. May 17 00:39:04.786067 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:39:04.846586 kernel: audit: type=1130 audit(1747442344.773:589): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.26.143:22-139.178.68.195:44378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:04.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.26.143:22-139.178.68.195:44378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:39:05.059000 audit[6316]: USER_ACCT pid=6316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:39:05.064213 sshd[6316]: Accepted publickey for core from 139.178.68.195 port 44378 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30 May 17 00:39:05.069027 kernel: audit: type=1101 audit(1747442345.059:590): pid=6316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:39:05.068000 audit[6316]: CRED_ACQ pid=6316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:39:05.075706 sshd[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:05.080876 kernel: audit: type=1103 audit(1747442345.068:591): pid=6316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:39:05.086998 kernel: audit: type=1006 audit(1747442345.068:592): pid=6316 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 May 17 00:39:05.087142 kernel: audit: type=1300 audit(1747442345.068:592): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc15afa190 a2=3 a3=0 items=0 ppid=1 pid=6316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:05.068000 audit[6316]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc15afa190 a2=3 a3=0 items=0 ppid=1 pid=6316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:05.068000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:39:05.098879 kernel: audit: type=1327 audit(1747442345.068:592): proctitle=737368643A20636F7265205B707269765D May 17 00:39:05.104352 systemd[1]: Started session-25.scope. May 17 00:39:05.105158 systemd-logind[1830]: New session 25 of user core. May 17 00:39:05.114000 audit[6316]: USER_START pid=6316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:39:05.125810 kernel: audit: type=1105 audit(1747442345.114:593): pid=6316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:39:05.127090 kernel: audit: type=1103 audit(1747442345.124:594): pid=6319 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:39:05.124000 audit[6319]: CRED_ACQ pid=6319 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' May 17 00:39:06.056575 sshd[6316]: pam_unix(sshd:session): session closed for user core May 17 00:39:06.058000 audit[6316]: USER_END pid=6316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:39:06.061209 systemd[1]: sshd@24-172.31.26.143:22-139.178.68.195:44378.service: Deactivated successfully. May 17 00:39:06.067933 kernel: audit: type=1106 audit(1747442346.058:595): pid=6316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:39:06.068021 kernel: audit: type=1104 audit(1747442346.058:596): pid=6316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:39:06.058000 audit[6316]: CRED_DISP pid=6316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 17 00:39:06.062151 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:39:06.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.26.143:22-139.178.68.195:44378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.073828 systemd-logind[1830]: Session 25 logged out. Waiting for processes to exit. 
May 17 00:39:06.076981 systemd-logind[1830]: Removed session 25. May 17 00:39:12.744978 systemd[1]: run-containerd-runc-k8s.io-6bf024fc83b024802c8b9fc4aa1a3b3e3ef5c240ac73965e8f66c07b2286fd0c-runc.cJGwvU.mount: Deactivated successfully. May 17 00:39:16.121354 kubelet[2785]: E0517 00:39:16.121262 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:39:18.076222 kubelet[2785]: E0517 00:39:18.076164 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:39:20.552310 env[1842]: time="2025-05-17T00:39:20.539908440Z" level=info msg="shim disconnected" id=9c7bfa7456b156b10e70567ad8cb5702d9e281e8ed4886bf7463b9f21748f1bf May 17 00:39:20.552310 env[1842]: time="2025-05-17T00:39:20.539971757Z" level=warning msg="cleaning up after shim disconnected" id=9c7bfa7456b156b10e70567ad8cb5702d9e281e8ed4886bf7463b9f21748f1bf namespace=k8s.io May 17 00:39:20.552310 env[1842]: time="2025-05-17T00:39:20.539985522Z" level=info msg="cleaning up dead shim" May 17 00:39:20.547793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c7bfa7456b156b10e70567ad8cb5702d9e281e8ed4886bf7463b9f21748f1bf-rootfs.mount: Deactivated successfully. 
May 17 00:39:20.562451 env[1842]: time="2025-05-17T00:39:20.554637693Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:39:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6382 runtime=io.containerd.runc.v2\n" May 17 00:39:21.474820 kubelet[2785]: I0517 00:39:21.474757 2785 scope.go:117] "RemoveContainer" containerID="9c7bfa7456b156b10e70567ad8cb5702d9e281e8ed4886bf7463b9f21748f1bf" May 17 00:39:21.506870 env[1842]: time="2025-05-17T00:39:21.506808241Z" level=info msg="CreateContainer within sandbox \"b8091f333cd97518bca61fff043ce8cd497380700c98f3f11b62da2401b874dc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 17 00:39:21.545488 env[1842]: time="2025-05-17T00:39:21.545414875Z" level=info msg="CreateContainer within sandbox \"b8091f333cd97518bca61fff043ce8cd497380700c98f3f11b62da2401b874dc\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c927e60ed5eafb9bcd81a651e1a6492aeaf34238499603498be961d5d8b57f93\"" May 17 00:39:21.546245 env[1842]: time="2025-05-17T00:39:21.546210380Z" level=info msg="StartContainer for \"c927e60ed5eafb9bcd81a651e1a6492aeaf34238499603498be961d5d8b57f93\"" May 17 00:39:21.631979 env[1842]: time="2025-05-17T00:39:21.631912208Z" level=info msg="StartContainer for \"c927e60ed5eafb9bcd81a651e1a6492aeaf34238499603498be961d5d8b57f93\" returns successfully" May 17 00:39:21.924843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7aa8b9d9977475576c02a656c8986fb397f283ab717067d928a1e4831d6aefa8-rootfs.mount: Deactivated successfully. 
May 17 00:39:21.927812 env[1842]: time="2025-05-17T00:39:21.927741410Z" level=info msg="shim disconnected" id=7aa8b9d9977475576c02a656c8986fb397f283ab717067d928a1e4831d6aefa8 May 17 00:39:21.927812 env[1842]: time="2025-05-17T00:39:21.927790756Z" level=warning msg="cleaning up after shim disconnected" id=7aa8b9d9977475576c02a656c8986fb397f283ab717067d928a1e4831d6aefa8 namespace=k8s.io May 17 00:39:21.927812 env[1842]: time="2025-05-17T00:39:21.927800064Z" level=info msg="cleaning up dead shim" May 17 00:39:21.937430 env[1842]: time="2025-05-17T00:39:21.937371611Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:39:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6443 runtime=io.containerd.runc.v2\n" May 17 00:39:22.464403 kubelet[2785]: I0517 00:39:22.464368 2785 scope.go:117] "RemoveContainer" containerID="7aa8b9d9977475576c02a656c8986fb397f283ab717067d928a1e4831d6aefa8" May 17 00:39:22.467323 env[1842]: time="2025-05-17T00:39:22.467280832Z" level=info msg="CreateContainer within sandbox \"4f9aa5543ea21278c1a7be9af2fdaf3d5c70823f63ae84b1808b28926b92bb4a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 17 00:39:22.496929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3512784953.mount: Deactivated successfully. May 17 00:39:22.504258 env[1842]: time="2025-05-17T00:39:22.504184060Z" level=info msg="CreateContainer within sandbox \"4f9aa5543ea21278c1a7be9af2fdaf3d5c70823f63ae84b1808b28926b92bb4a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ecb019e67437c415b281d9ab385ca51d85de8c0484666b705f19766d88cb7870\"" May 17 00:39:22.505261 env[1842]: time="2025-05-17T00:39:22.505215026Z" level=info msg="StartContainer for \"ecb019e67437c415b281d9ab385ca51d85de8c0484666b705f19766d88cb7870\"" May 17 00:39:22.561182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864087073.mount: Deactivated successfully. 
May 17 00:39:22.600113 env[1842]: time="2025-05-17T00:39:22.600047957Z" level=info msg="StartContainer for \"ecb019e67437c415b281d9ab385ca51d85de8c0484666b705f19766d88cb7870\" returns successfully" May 17 00:39:23.945504 kubelet[2785]: E0517 00:39:23.945437 2785 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-143?timeout=10s\": context deadline exceeded" May 17 00:39:26.865895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4babc7378c186cbe450d1b0e0dcf890f0005ff4a1d7a12de0bf5ac8038ba0ed2-rootfs.mount: Deactivated successfully. May 17 00:39:26.871025 env[1842]: time="2025-05-17T00:39:26.869285955Z" level=info msg="shim disconnected" id=4babc7378c186cbe450d1b0e0dcf890f0005ff4a1d7a12de0bf5ac8038ba0ed2 May 17 00:39:26.871025 env[1842]: time="2025-05-17T00:39:26.869346302Z" level=warning msg="cleaning up after shim disconnected" id=4babc7378c186cbe450d1b0e0dcf890f0005ff4a1d7a12de0bf5ac8038ba0ed2 namespace=k8s.io May 17 00:39:26.871025 env[1842]: time="2025-05-17T00:39:26.869357794Z" level=info msg="cleaning up dead shim" May 17 00:39:26.883205 env[1842]: time="2025-05-17T00:39:26.883149072Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:39:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6509 runtime=io.containerd.runc.v2\n" May 17 00:39:27.483351 kubelet[2785]: I0517 00:39:27.483313 2785 scope.go:117] "RemoveContainer" containerID="4babc7378c186cbe450d1b0e0dcf890f0005ff4a1d7a12de0bf5ac8038ba0ed2" May 17 00:39:27.486107 env[1842]: time="2025-05-17T00:39:27.486060568Z" level=info msg="CreateContainer within sandbox \"4770ad18ec1f77f183705f1e9e9bf3746fe7dada50f229fd921be8b807d43e72\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 17 00:39:27.515519 env[1842]: time="2025-05-17T00:39:27.515435028Z" level=info msg="CreateContainer within sandbox 
\"4770ad18ec1f77f183705f1e9e9bf3746fe7dada50f229fd921be8b807d43e72\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a99831d8ac080475df642872410ece0510e67ddda89d9c3ba9cf369fa930ebef\"" May 17 00:39:27.516257 env[1842]: time="2025-05-17T00:39:27.516126434Z" level=info msg="StartContainer for \"a99831d8ac080475df642872410ece0510e67ddda89d9c3ba9cf369fa930ebef\"" May 17 00:39:27.608159 env[1842]: time="2025-05-17T00:39:27.607982460Z" level=info msg="StartContainer for \"a99831d8ac080475df642872410ece0510e67ddda89d9c3ba9cf369fa930ebef\" returns successfully" May 17 00:39:29.072822 kubelet[2785]: E0517 00:39:29.072784 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-m25cz" podUID="d4a0e73a-3af9-4fe3-8741-031055e915ab" May 17 00:39:32.074301 kubelet[2785]: E0517 00:39:32.074233 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-6686775679-47m7l" podUID="24aa9389-753f-4495-bc1f-dde18df8a4e1" May 17 00:39:32.560956 systemd[1]: run-containerd-runc-k8s.io-6db51b63b6008adc061663f3366a94be41006043a16ac30596732114aa499e66-runc.Zsyy6S.mount: Deactivated successfully. 
May 17 00:39:33.957312 kubelet[2785]: E0517 00:39:33.957239 2785 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-143?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 17 00:39:34.075996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c927e60ed5eafb9bcd81a651e1a6492aeaf34238499603498be961d5d8b57f93-rootfs.mount: Deactivated successfully. May 17 00:39:34.097614 env[1842]: time="2025-05-17T00:39:34.097543136Z" level=info msg="shim disconnected" id=c927e60ed5eafb9bcd81a651e1a6492aeaf34238499603498be961d5d8b57f93 May 17 00:39:34.097614 env[1842]: time="2025-05-17T00:39:34.097614378Z" level=warning msg="cleaning up after shim disconnected" id=c927e60ed5eafb9bcd81a651e1a6492aeaf34238499603498be961d5d8b57f93 namespace=k8s.io May 17 00:39:34.097614 env[1842]: time="2025-05-17T00:39:34.097627858Z" level=info msg="cleaning up dead shim" May 17 00:39:34.110365 env[1842]: time="2025-05-17T00:39:34.110311171Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:39:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6593 runtime=io.containerd.runc.v2\n" May 17 00:39:34.506457 kubelet[2785]: I0517 00:39:34.506413 2785 scope.go:117] "RemoveContainer" containerID="9c7bfa7456b156b10e70567ad8cb5702d9e281e8ed4886bf7463b9f21748f1bf" May 17 00:39:34.525659 kubelet[2785]: I0517 00:39:34.525632 2785 scope.go:117] "RemoveContainer" containerID="c927e60ed5eafb9bcd81a651e1a6492aeaf34238499603498be961d5d8b57f93" May 17 00:39:34.526153 kubelet[2785]: E0517 00:39:34.526126 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7c5755cdcb-8xdsh_tigera-operator(73e8066c-f770-4b96-8e83-59c998e1f842)\"" 
pod="tigera-operator/tigera-operator-7c5755cdcb-8xdsh" podUID="73e8066c-f770-4b96-8e83-59c998e1f842" May 17 00:39:34.542524 env[1842]: time="2025-05-17T00:39:34.542468940Z" level=info msg="RemoveContainer for \"9c7bfa7456b156b10e70567ad8cb5702d9e281e8ed4886bf7463b9f21748f1bf\"" May 17 00:39:34.548573 env[1842]: time="2025-05-17T00:39:34.548525125Z" level=info msg="RemoveContainer for \"9c7bfa7456b156b10e70567ad8cb5702d9e281e8ed4886bf7463b9f21748f1bf\" returns successfully"